00:00:00.001 Started by upstream project "autotest-per-patch" build number 126228 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.070 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.100 Using shallow fetch with depth 1 00:00:00.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.100 > git --version # timeout=10 00:00:00.146 > git --version # 'git version 2.39.2' 00:00:00.146 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.476 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.485 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.494 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.494 > git config core.sparsecheckout # timeout=10 00:00:03.504 > git read-tree -mu HEAD # timeout=10 00:00:03.523 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.544 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.544 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.614 [Pipeline] Start of Pipeline 00:00:03.626 [Pipeline] library 00:00:03.627 Loading library shm_lib@master 00:00:03.627 Library shm_lib@master is cached. Copying from home. 00:00:03.642 [Pipeline] node 00:00:03.650 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.651 [Pipeline] { 00:00:03.659 [Pipeline] catchError 00:00:03.661 [Pipeline] { 00:00:03.674 [Pipeline] wrap 00:00:03.683 [Pipeline] { 00:00:03.691 [Pipeline] stage 00:00:03.692 [Pipeline] { (Prologue) 00:00:03.842 [Pipeline] sh 00:00:04.125 + logger -p user.info -t JENKINS-CI 00:00:04.146 [Pipeline] echo 00:00:04.147 Node: CYP9 00:00:04.155 [Pipeline] sh 00:00:04.456 [Pipeline] setCustomBuildProperty 00:00:04.467 [Pipeline] echo 00:00:04.468 Cleanup processes 00:00:04.471 [Pipeline] sh 00:00:04.762 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.762 646415 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.777 [Pipeline] sh 00:00:05.067 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.067 ++ grep -v 'sudo pgrep' 00:00:05.067 ++ awk '{print $1}' 00:00:05.067 + sudo kill -9 00:00:05.067 + true 00:00:05.079 [Pipeline] cleanWs 00:00:05.088 [WS-CLEANUP] Deleting project workspace... 00:00:05.088 [WS-CLEANUP] Deferred wipeout is used... 
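For readers following the '+'/'++' xtrace lines in the Prologue stage above, this is a minimal sketch of what that process-cleanup pipeline amounts to. It is a reconstruction for illustration only: the workspace path is taken from the log, but the xargs form is an assumption (the log itself expands the PID list directly into "sudo kill -9" and then falls through to "true" when nothing matched).

# Hedged sketch of the Prologue cleanup traced above: find leftover SPDK
# processes from a previous run in this workspace and force-kill them.
# "|| true" mirrors the "+ true" in the log so an empty match does not fail the stage.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
sudo pgrep -af "$WORKSPACE/spdk" \
  | grep -v 'sudo pgrep' \
  | awk '{print $1}' \
  | xargs -r sudo kill -9 || true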
00:00:05.095 [WS-CLEANUP] done 00:00:05.098 [Pipeline] setCustomBuildProperty 00:00:05.108 [Pipeline] sh 00:00:05.390 + sudo git config --global --replace-all safe.directory '*' 00:00:05.456 [Pipeline] httpRequest 00:00:05.489 [Pipeline] echo 00:00:05.490 Sorcerer 10.211.164.101 is alive 00:00:05.498 [Pipeline] httpRequest 00:00:05.502 HttpMethod: GET 00:00:05.503 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.503 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.516 Response Code: HTTP/1.1 200 OK 00:00:05.516 Success: Status code 200 is in the accepted range: 200,404 00:00:05.517 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.941 [Pipeline] sh 00:00:10.255 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.273 [Pipeline] httpRequest 00:00:10.301 [Pipeline] echo 00:00:10.303 Sorcerer 10.211.164.101 is alive 00:00:10.311 [Pipeline] httpRequest 00:00:10.316 HttpMethod: GET 00:00:10.317 URL: http://10.211.164.101/packages/spdk_35c1e586ca34c8b8ca4571dc64406d099a4c7425.tar.gz 00:00:10.317 Sending request to url: http://10.211.164.101/packages/spdk_35c1e586ca34c8b8ca4571dc64406d099a4c7425.tar.gz 00:00:10.341 Response Code: HTTP/1.1 200 OK 00:00:10.342 Success: Status code 200 is in the accepted range: 200,404 00:00:10.342 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_35c1e586ca34c8b8ca4571dc64406d099a4c7425.tar.gz 00:01:11.371 [Pipeline] sh 00:01:11.658 + tar --no-same-owner -xf spdk_35c1e586ca34c8b8ca4571dc64406d099a4c7425.tar.gz 00:01:14.217 [Pipeline] sh 00:01:14.528 + git -C spdk log --oneline -n5 00:01:14.528 35c1e586c scripts/setup: Try to gracefully handle unsupported nic_uio devices 00:01:14.528 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:14.528 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:14.528 2d30d9f83 accel: introduce tasks in sequence limit 00:01:14.528 2728651ee accel: adjust task per ch define name 00:01:14.542 [Pipeline] } 00:01:14.561 [Pipeline] // stage 00:01:14.571 [Pipeline] stage 00:01:14.573 [Pipeline] { (Prepare) 00:01:14.591 [Pipeline] writeFile 00:01:14.608 [Pipeline] sh 00:01:14.890 + logger -p user.info -t JENKINS-CI 00:01:14.903 [Pipeline] sh 00:01:15.190 + logger -p user.info -t JENKINS-CI 00:01:15.201 [Pipeline] sh 00:01:15.483 + cat autorun-spdk.conf 00:01:15.483 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.483 SPDK_TEST_NVMF=1 00:01:15.483 SPDK_TEST_NVME_CLI=1 00:01:15.483 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.483 SPDK_TEST_NVMF_NICS=e810 00:01:15.483 SPDK_TEST_VFIOUSER=1 00:01:15.483 SPDK_RUN_UBSAN=1 00:01:15.483 NET_TYPE=phy 00:01:15.491 RUN_NIGHTLY=0 00:01:15.496 [Pipeline] readFile 00:01:15.521 [Pipeline] withEnv 00:01:15.523 [Pipeline] { 00:01:15.536 [Pipeline] sh 00:01:15.821 + set -ex 00:01:15.821 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:15.821 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.821 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.821 ++ SPDK_TEST_NVMF=1 00:01:15.821 ++ SPDK_TEST_NVME_CLI=1 00:01:15.821 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.821 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.821 ++ SPDK_TEST_VFIOUSER=1 00:01:15.821 ++ SPDK_RUN_UBSAN=1 00:01:15.821 ++ NET_TYPE=phy 00:01:15.821 ++ RUN_NIGHTLY=0 00:01:15.821 + case $SPDK_TEST_NVMF_NICS in 00:01:15.821 + DRIVERS=ice 00:01:15.821 + [[ tcp == \r\d\m\a ]] 00:01:15.821 + [[ -n ice ]] 00:01:15.821 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:15.821 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:15.821 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:15.821 rmmod: ERROR: Module irdma is not currently loaded 00:01:15.821 rmmod: ERROR: Module i40iw is not currently loaded 00:01:15.821 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:15.821 + true 00:01:15.821 + for D in $DRIVERS 00:01:15.821 + sudo modprobe ice 00:01:15.821 + exit 0 00:01:15.832 [Pipeline] } 00:01:15.854 [Pipeline] // withEnv 00:01:15.859 [Pipeline] } 00:01:15.877 [Pipeline] // stage 00:01:15.889 [Pipeline] catchError 00:01:15.891 [Pipeline] { 00:01:15.907 [Pipeline] timeout 00:01:15.907 Timeout set to expire in 50 min 00:01:15.909 [Pipeline] { 00:01:15.923 [Pipeline] stage 00:01:15.925 [Pipeline] { (Tests) 00:01:15.943 [Pipeline] sh 00:01:16.229 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.230 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.230 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.230 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:16.230 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.230 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:16.230 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:16.230 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:16.230 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:16.230 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:16.230 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:16.230 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.230 + source /etc/os-release 00:01:16.230 ++ NAME='Fedora Linux' 00:01:16.230 ++ VERSION='38 (Cloud Edition)' 00:01:16.230 ++ ID=fedora 00:01:16.230 ++ VERSION_ID=38 00:01:16.230 ++ VERSION_CODENAME= 00:01:16.230 ++ PLATFORM_ID=platform:f38 00:01:16.230 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:16.230 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:16.230 ++ LOGO=fedora-logo-icon 00:01:16.230 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:16.230 ++ HOME_URL=https://fedoraproject.org/ 00:01:16.230 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:16.230 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:16.230 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:16.230 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:16.230 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:16.230 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:16.230 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:16.230 ++ SUPPORT_END=2024-05-14 00:01:16.230 ++ VARIANT='Cloud Edition' 00:01:16.230 ++ VARIANT_ID=cloud 00:01:16.230 + uname -a 00:01:16.230 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:16.230 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:19.530 Hugepages 00:01:19.530 node hugesize free / total 00:01:19.530 node0 1048576kB 0 / 0 00:01:19.530 node0 2048kB 0 / 0 00:01:19.530 node1 1048576kB 0 / 0 00:01:19.530 node1 2048kB 0 / 0 00:01:19.530 00:01:19.530 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.530 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:19.530 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:19.530 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:19.530 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:19.530 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:19.530 + rm -f /tmp/spdk-ld-path 00:01:19.530 + source autorun-spdk.conf 00:01:19.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.530 ++ SPDK_TEST_NVMF=1 00:01:19.530 ++ SPDK_TEST_NVME_CLI=1 00:01:19.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.530 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.530 ++ SPDK_TEST_VFIOUSER=1 00:01:19.530 ++ SPDK_RUN_UBSAN=1 00:01:19.530 ++ NET_TYPE=phy 00:01:19.530 ++ RUN_NIGHTLY=0 00:01:19.530 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.530 + [[ -n '' ]] 00:01:19.530 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.530 + for M in /var/spdk/build-*-manifest.txt 00:01:19.530 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:19.530 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.530 + for M in /var/spdk/build-*-manifest.txt 00:01:19.530 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.530 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.530 ++ uname 00:01:19.530 + [[ Linux == \L\i\n\u\x ]] 00:01:19.530 + sudo dmesg -T 00:01:19.530 + sudo dmesg --clear 00:01:19.530 + dmesg_pid=647391 00:01:19.530 + [[ Fedora Linux == FreeBSD ]] 00:01:19.530 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.530 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.530 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.530 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:19.530 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:19.530 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.530 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.530 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.530 + sudo dmesg -Tw 00:01:19.530 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.530 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:19.530 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.530 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.530 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.530 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.530 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.530 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.530 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.530 Test configuration: 00:01:19.530 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.530 SPDK_TEST_NVMF=1 00:01:19.530 SPDK_TEST_NVME_CLI=1 00:01:19.530 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.530 SPDK_TEST_NVMF_NICS=e810 00:01:19.530 SPDK_TEST_VFIOUSER=1 00:01:19.530 SPDK_RUN_UBSAN=1 00:01:19.530 NET_TYPE=phy 00:01:19.530 RUN_NIGHTLY=0 19:56:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:19.530 19:56:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.530 19:56:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.530 19:56:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.530 19:56:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.530 19:56:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.530 19:56:16 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.530 19:56:16 -- paths/export.sh@5 -- $ export PATH 00:01:19.530 19:56:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.530 19:56:16 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:19.530 19:56:16 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:19.530 19:56:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721066176.XXXXXX 00:01:19.530 19:56:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721066176.BbLtOV 00:01:19.530 19:56:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:19.530 19:56:16 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:19.530 19:56:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:19.530 19:56:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:19.530 19:56:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.530 19:56:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:19.530 19:56:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:19.530 19:56:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.530 19:56:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:19.530 19:56:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:19.530 19:56:16 -- pm/common@17 -- $ local monitor 00:01:19.530 19:56:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.531 19:56:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.531 19:56:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.531 19:56:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.531 19:56:16 -- pm/common@21 -- $ date +%s 00:01:19.531 19:56:16 -- pm/common@25 -- $ sleep 1 00:01:19.531 19:56:16 -- pm/common@21 -- $ date +%s 00:01:19.531 19:56:16 -- pm/common@21 -- $ date +%s 00:01:19.531 19:56:16 -- pm/common@21 -- $ date +%s 00:01:19.531 19:56:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066176 00:01:19.531 19:56:16 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066176 00:01:19.531 19:56:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066176 00:01:19.531 19:56:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066176 00:01:19.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066176_collect-vmstat.pm.log 00:01:19.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066176_collect-cpu-load.pm.log 00:01:19.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066176_collect-cpu-temp.pm.log 00:01:19.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066176_collect-bmc-pm.bmc.pm.log 00:01:20.486 19:56:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:20.486 19:56:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.486 19:56:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.486 19:56:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.486 19:56:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.486 Mon Jul 15 05:56:17 PM UTC 2024 00:01:20.486 19:56:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.486 v24.09-pre-210-g35c1e586c 00:01:20.486 19:56:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.486 19:56:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.486 19:56:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.486 19:56:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:20.486 19:56:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:20.486 19:56:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.486 ************************************ 00:01:20.486 START TEST ubsan 00:01:20.486 ************************************ 00:01:20.486 19:56:17 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:20.486 using ubsan 00:01:20.486 00:01:20.486 real 0m0.000s 00:01:20.486 user 0m0.000s 00:01:20.486 sys 0m0.000s 00:01:20.486 19:56:17 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:20.486 19:56:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.486 ************************************ 00:01:20.486 END TEST ubsan 00:01:20.486 ************************************ 00:01:20.486 19:56:17 -- common/autotest_common.sh@1142 -- $ return 0 00:01:20.486 19:56:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.486 19:56:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.486 19:56:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.486 19:56:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.486 19:56:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.486 19:56:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.486 19:56:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.486 19:56:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.486 19:56:17 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:20.746 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:20.746 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.315 Using 'verbs' RDMA provider 00:01:37.171 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:49.409 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:49.409 Creating mk/config.mk...done. 00:01:49.409 Creating mk/cc.flags.mk...done. 00:01:49.409 Type 'make' to build. 00:01:49.409 19:56:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:49.409 19:56:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.409 19:56:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.409 19:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.409 ************************************ 00:01:49.409 START TEST make 00:01:49.409 ************************************ 00:01:49.409 19:56:46 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:49.409 make[1]: Nothing to be done for 'all'. 00:01:50.787 The Meson build system 00:01:50.787 Version: 1.3.1 00:01:50.787 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:50.787 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:50.787 Build type: native build 00:01:50.787 Project name: libvfio-user 00:01:50.787 Project version: 0.0.1 00:01:50.787 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.787 C linker for the host machine: cc ld.bfd 2.39-16 00:01:50.787 Host machine cpu family: x86_64 00:01:50.787 Host machine cpu: x86_64 00:01:50.787 Run-time dependency threads found: YES 00:01:50.787 Library dl found: YES 00:01:50.787 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.787 Run-time dependency json-c found: YES 0.17 00:01:50.787 Run-time dependency cmocka found: YES 1.1.7 00:01:50.787 Program pytest-3 found: NO 00:01:50.787 Program flake8 found: NO 00:01:50.787 Program misspell-fixer found: NO 00:01:50.787 Program restructuredtext-lint found: NO 00:01:50.787 Program valgrind found: YES (/usr/bin/valgrind) 00:01:50.787 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.787 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.787 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.787 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:50.787 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:50.787 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:50.787 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:50.787 Build targets in project: 8 00:01:50.787 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:50.787 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:50.787 00:01:50.787 libvfio-user 0.0.1 00:01:50.787 00:01:50.787 User defined options 00:01:50.787 buildtype : debug 00:01:50.787 default_library: shared 00:01:50.787 libdir : /usr/local/lib 00:01:50.787 00:01:50.787 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:51.044 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:51.044 [2/37] Compiling C object samples/null.p/null.c.o 00:01:51.044 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:51.044 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:51.044 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:51.044 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:51.044 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:51.044 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:51.044 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:51.045 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:51.045 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:51.045 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:51.045 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:51.045 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:51.045 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:51.045 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:51.045 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:51.045 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:51.045 [19/37] Compiling C object samples/server.p/server.c.o 00:01:51.045 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:51.045 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:51.045 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:51.045 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:51.045 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:51.045 [25/37] Compiling C object samples/client.p/client.c.o 00:01:51.045 [26/37] Linking target samples/client 00:01:51.045 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:51.045 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:51.045 [29/37] Linking target test/unit_tests 00:01:51.045 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:51.045 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:51.303 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:51.303 [33/37] Linking target samples/gpio-pci-idio-16 00:01:51.303 [34/37] Linking target samples/lspci 00:01:51.303 [35/37] Linking target samples/server 00:01:51.303 [36/37] Linking target samples/null 00:01:51.303 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:51.303 INFO: autodetecting backend as ninja 00:01:51.303 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
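For context, a minimal sketch of the libvfio-user build-and-install step that the surrounding log records. This is not the CI script itself: the directories and the DESTDIR install command are taken from the log, and the build directory is assumed to have been configured earlier with buildtype=debug and default_library=shared, matching Meson's "User defined options" summary above.

# Hedged sketch of the libvfio-user step shown above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BUILD_DIR="$SPDK_DIR/build/libvfio-user/build-debug"

ninja -C "$BUILD_DIR"                        # builds the 37 targets listed above
DESTDIR="$SPDK_DIR/build/libvfio-user" \
  meson install --quiet -C "$BUILD_DIR"      # stages the library; the install's ninja pass reports "no work to do" below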
00:01:51.563 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:51.823 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:51.823 ninja: no work to do. 00:01:58.407 The Meson build system 00:01:58.407 Version: 1.3.1 00:01:58.407 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:58.407 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:58.407 Build type: native build 00:01:58.407 Program cat found: YES (/usr/bin/cat) 00:01:58.407 Project name: DPDK 00:01:58.407 Project version: 24.03.0 00:01:58.407 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:58.407 C linker for the host machine: cc ld.bfd 2.39-16 00:01:58.407 Host machine cpu family: x86_64 00:01:58.407 Host machine cpu: x86_64 00:01:58.407 Message: ## Building in Developer Mode ## 00:01:58.407 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.407 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.407 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.407 Program python3 found: YES (/usr/bin/python3) 00:01:58.407 Program cat found: YES (/usr/bin/cat) 00:01:58.407 Compiler for C supports arguments -march=native: YES 00:01:58.407 Checking for size of "void *" : 8 00:01:58.407 Checking for size of "void *" : 8 (cached) 00:01:58.407 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:58.407 Library m found: YES 00:01:58.407 Library numa found: YES 00:01:58.407 Has header "numaif.h" : YES 00:01:58.407 Library fdt found: NO 00:01:58.407 Library execinfo found: NO 00:01:58.407 Has header "execinfo.h" : YES 00:01:58.407 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:58.407 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.407 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.407 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.407 Run-time dependency openssl found: YES 3.0.9 00:01:58.407 Run-time dependency libpcap found: YES 1.10.4 00:01:58.407 Has header "pcap.h" with dependency libpcap: YES 00:01:58.408 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.408 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.408 Compiler for C supports arguments -Wformat: YES 00:01:58.408 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.408 Compiler for C supports arguments -Wformat-security: NO 00:01:58.408 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.408 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.408 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.408 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.408 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.408 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.408 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.408 Compiler for C supports arguments -Wundef: YES 00:01:58.408 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.408 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.408 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:58.408 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.408 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.408 Program objdump found: YES (/usr/bin/objdump) 00:01:58.408 Compiler for C supports arguments -mavx512f: YES 00:01:58.408 Checking if "AVX512 checking" compiles: YES 00:01:58.408 Fetching value of define "__SSE4_2__" : 1 00:01:58.408 Fetching value of define "__AES__" : 1 00:01:58.408 Fetching value of define "__AVX__" : 1 00:01:58.408 Fetching value of define "__AVX2__" : 1 00:01:58.408 Fetching value of define "__AVX512BW__" : 1 00:01:58.408 Fetching value of define "__AVX512CD__" : 1 00:01:58.408 Fetching value of define "__AVX512DQ__" : 1 00:01:58.408 Fetching value of define "__AVX512F__" : 1 00:01:58.408 Fetching value of define "__AVX512VL__" : 1 00:01:58.408 Fetching value of define "__PCLMUL__" : 1 00:01:58.408 Fetching value of define "__RDRND__" : 1 00:01:58.408 Fetching value of define "__RDSEED__" : 1 00:01:58.408 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:58.408 Fetching value of define "__znver1__" : (undefined) 00:01:58.408 Fetching value of define "__znver2__" : (undefined) 00:01:58.408 Fetching value of define "__znver3__" : (undefined) 00:01:58.408 Fetching value of define "__znver4__" : (undefined) 00:01:58.408 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.408 Message: lib/log: Defining dependency "log" 00:01:58.408 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.408 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.408 Checking for function "getentropy" : NO 00:01:58.408 Message: lib/eal: Defining dependency "eal" 00:01:58.408 Message: lib/ring: Defining dependency "ring" 00:01:58.408 Message: lib/rcu: Defining dependency "rcu" 00:01:58.408 Message: lib/mempool: Defining dependency "mempool" 00:01:58.408 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.408 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.408 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.408 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.408 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.408 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.408 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:58.408 Compiler for C supports arguments -mpclmul: YES 00:01:58.408 Compiler for C supports arguments -maes: YES 00:01:58.408 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.408 Compiler for C supports arguments -mavx512bw: YES 00:01:58.408 Compiler for C supports arguments -mavx512dq: YES 00:01:58.408 Compiler for C supports arguments -mavx512vl: YES 00:01:58.408 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.408 Compiler for C supports arguments -mavx2: YES 00:01:58.408 Compiler for C supports arguments -mavx: YES 00:01:58.408 Message: lib/net: Defining dependency "net" 00:01:58.408 Message: lib/meter: Defining dependency "meter" 00:01:58.408 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.408 Message: lib/pci: Defining dependency "pci" 00:01:58.408 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.408 Message: lib/hash: Defining dependency "hash" 00:01:58.408 Message: lib/timer: Defining dependency "timer" 00:01:58.408 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.408 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.408 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.408 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:58.408 Message: lib/power: Defining dependency "power" 00:01:58.408 Message: lib/reorder: Defining dependency "reorder" 00:01:58.408 Message: lib/security: Defining dependency "security" 00:01:58.408 Has header "linux/userfaultfd.h" : YES 00:01:58.408 Has header "linux/vduse.h" : YES 00:01:58.408 Message: lib/vhost: Defining dependency "vhost" 00:01:58.408 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.408 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.408 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.408 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.408 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.408 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.408 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.408 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.408 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.408 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.408 Program doxygen found: YES (/usr/bin/doxygen) 00:01:58.408 Configuring doxy-api-html.conf using configuration 00:01:58.408 Configuring doxy-api-man.conf using configuration 00:01:58.408 Program mandb found: YES (/usr/bin/mandb) 00:01:58.408 Program sphinx-build found: NO 00:01:58.408 Configuring rte_build_config.h using configuration 00:01:58.408 Message: 00:01:58.408 ================= 00:01:58.408 Applications Enabled 00:01:58.408 ================= 00:01:58.408 00:01:58.408 apps: 00:01:58.408 00:01:58.408 00:01:58.408 Message: 00:01:58.408 ================= 00:01:58.408 Libraries Enabled 00:01:58.408 ================= 00:01:58.408 00:01:58.408 libs: 00:01:58.408 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.408 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.408 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.408 00:01:58.408 Message: 00:01:58.408 =============== 00:01:58.408 Drivers Enabled 00:01:58.408 =============== 00:01:58.408 00:01:58.408 common: 00:01:58.408 00:01:58.408 bus: 00:01:58.408 pci, vdev, 00:01:58.408 mempool: 00:01:58.408 ring, 00:01:58.408 dma: 00:01:58.408 00:01:58.408 net: 00:01:58.408 00:01:58.408 crypto: 00:01:58.408 00:01:58.408 compress: 00:01:58.408 00:01:58.408 vdpa: 00:01:58.408 00:01:58.408 00:01:58.408 Message: 00:01:58.408 ================= 00:01:58.408 Content Skipped 00:01:58.408 ================= 00:01:58.408 00:01:58.408 apps: 00:01:58.408 dumpcap: explicitly disabled via build config 00:01:58.408 graph: explicitly disabled via build config 00:01:58.408 pdump: explicitly disabled via build config 00:01:58.408 proc-info: explicitly disabled via build config 00:01:58.408 test-acl: explicitly disabled via build config 00:01:58.408 test-bbdev: explicitly disabled via build config 00:01:58.408 test-cmdline: explicitly disabled via build config 00:01:58.408 test-compress-perf: explicitly disabled via build config 00:01:58.408 test-crypto-perf: explicitly disabled via build config 00:01:58.408 test-dma-perf: explicitly disabled via build config 00:01:58.408 test-eventdev: explicitly disabled via build config 00:01:58.408 test-fib: explicitly disabled via build config 00:01:58.408 test-flow-perf: explicitly disabled via build config 00:01:58.408 test-gpudev: explicitly disabled via build config 00:01:58.408 
test-mldev: explicitly disabled via build config 00:01:58.408 test-pipeline: explicitly disabled via build config 00:01:58.408 test-pmd: explicitly disabled via build config 00:01:58.408 test-regex: explicitly disabled via build config 00:01:58.408 test-sad: explicitly disabled via build config 00:01:58.408 test-security-perf: explicitly disabled via build config 00:01:58.408 00:01:58.408 libs: 00:01:58.408 argparse: explicitly disabled via build config 00:01:58.408 metrics: explicitly disabled via build config 00:01:58.408 acl: explicitly disabled via build config 00:01:58.408 bbdev: explicitly disabled via build config 00:01:58.408 bitratestats: explicitly disabled via build config 00:01:58.408 bpf: explicitly disabled via build config 00:01:58.408 cfgfile: explicitly disabled via build config 00:01:58.408 distributor: explicitly disabled via build config 00:01:58.408 efd: explicitly disabled via build config 00:01:58.408 eventdev: explicitly disabled via build config 00:01:58.408 dispatcher: explicitly disabled via build config 00:01:58.408 gpudev: explicitly disabled via build config 00:01:58.408 gro: explicitly disabled via build config 00:01:58.408 gso: explicitly disabled via build config 00:01:58.408 ip_frag: explicitly disabled via build config 00:01:58.408 jobstats: explicitly disabled via build config 00:01:58.408 latencystats: explicitly disabled via build config 00:01:58.408 lpm: explicitly disabled via build config 00:01:58.408 member: explicitly disabled via build config 00:01:58.408 pcapng: explicitly disabled via build config 00:01:58.408 rawdev: explicitly disabled via build config 00:01:58.408 regexdev: explicitly disabled via build config 00:01:58.408 mldev: explicitly disabled via build config 00:01:58.408 rib: explicitly disabled via build config 00:01:58.408 sched: explicitly disabled via build config 00:01:58.408 stack: explicitly disabled via build config 00:01:58.408 ipsec: explicitly disabled via build config 00:01:58.408 pdcp: explicitly disabled via build config 00:01:58.408 fib: explicitly disabled via build config 00:01:58.408 port: explicitly disabled via build config 00:01:58.408 pdump: explicitly disabled via build config 00:01:58.408 table: explicitly disabled via build config 00:01:58.408 pipeline: explicitly disabled via build config 00:01:58.408 graph: explicitly disabled via build config 00:01:58.408 node: explicitly disabled via build config 00:01:58.408 00:01:58.408 drivers: 00:01:58.408 common/cpt: not in enabled drivers build config 00:01:58.408 common/dpaax: not in enabled drivers build config 00:01:58.408 common/iavf: not in enabled drivers build config 00:01:58.408 common/idpf: not in enabled drivers build config 00:01:58.408 common/ionic: not in enabled drivers build config 00:01:58.408 common/mvep: not in enabled drivers build config 00:01:58.408 common/octeontx: not in enabled drivers build config 00:01:58.408 bus/auxiliary: not in enabled drivers build config 00:01:58.408 bus/cdx: not in enabled drivers build config 00:01:58.408 bus/dpaa: not in enabled drivers build config 00:01:58.409 bus/fslmc: not in enabled drivers build config 00:01:58.409 bus/ifpga: not in enabled drivers build config 00:01:58.409 bus/platform: not in enabled drivers build config 00:01:58.409 bus/uacce: not in enabled drivers build config 00:01:58.409 bus/vmbus: not in enabled drivers build config 00:01:58.409 common/cnxk: not in enabled drivers build config 00:01:58.409 common/mlx5: not in enabled drivers build config 00:01:58.409 common/nfp: not in enabled drivers 
build config 00:01:58.409 common/nitrox: not in enabled drivers build config 00:01:58.409 common/qat: not in enabled drivers build config 00:01:58.409 common/sfc_efx: not in enabled drivers build config 00:01:58.409 mempool/bucket: not in enabled drivers build config 00:01:58.409 mempool/cnxk: not in enabled drivers build config 00:01:58.409 mempool/dpaa: not in enabled drivers build config 00:01:58.409 mempool/dpaa2: not in enabled drivers build config 00:01:58.409 mempool/octeontx: not in enabled drivers build config 00:01:58.409 mempool/stack: not in enabled drivers build config 00:01:58.409 dma/cnxk: not in enabled drivers build config 00:01:58.409 dma/dpaa: not in enabled drivers build config 00:01:58.409 dma/dpaa2: not in enabled drivers build config 00:01:58.409 dma/hisilicon: not in enabled drivers build config 00:01:58.409 dma/idxd: not in enabled drivers build config 00:01:58.409 dma/ioat: not in enabled drivers build config 00:01:58.409 dma/skeleton: not in enabled drivers build config 00:01:58.409 net/af_packet: not in enabled drivers build config 00:01:58.409 net/af_xdp: not in enabled drivers build config 00:01:58.409 net/ark: not in enabled drivers build config 00:01:58.409 net/atlantic: not in enabled drivers build config 00:01:58.409 net/avp: not in enabled drivers build config 00:01:58.409 net/axgbe: not in enabled drivers build config 00:01:58.409 net/bnx2x: not in enabled drivers build config 00:01:58.409 net/bnxt: not in enabled drivers build config 00:01:58.409 net/bonding: not in enabled drivers build config 00:01:58.409 net/cnxk: not in enabled drivers build config 00:01:58.409 net/cpfl: not in enabled drivers build config 00:01:58.409 net/cxgbe: not in enabled drivers build config 00:01:58.409 net/dpaa: not in enabled drivers build config 00:01:58.409 net/dpaa2: not in enabled drivers build config 00:01:58.409 net/e1000: not in enabled drivers build config 00:01:58.409 net/ena: not in enabled drivers build config 00:01:58.409 net/enetc: not in enabled drivers build config 00:01:58.409 net/enetfec: not in enabled drivers build config 00:01:58.409 net/enic: not in enabled drivers build config 00:01:58.409 net/failsafe: not in enabled drivers build config 00:01:58.409 net/fm10k: not in enabled drivers build config 00:01:58.409 net/gve: not in enabled drivers build config 00:01:58.409 net/hinic: not in enabled drivers build config 00:01:58.409 net/hns3: not in enabled drivers build config 00:01:58.409 net/i40e: not in enabled drivers build config 00:01:58.409 net/iavf: not in enabled drivers build config 00:01:58.409 net/ice: not in enabled drivers build config 00:01:58.409 net/idpf: not in enabled drivers build config 00:01:58.409 net/igc: not in enabled drivers build config 00:01:58.409 net/ionic: not in enabled drivers build config 00:01:58.409 net/ipn3ke: not in enabled drivers build config 00:01:58.409 net/ixgbe: not in enabled drivers build config 00:01:58.409 net/mana: not in enabled drivers build config 00:01:58.409 net/memif: not in enabled drivers build config 00:01:58.409 net/mlx4: not in enabled drivers build config 00:01:58.409 net/mlx5: not in enabled drivers build config 00:01:58.409 net/mvneta: not in enabled drivers build config 00:01:58.409 net/mvpp2: not in enabled drivers build config 00:01:58.409 net/netvsc: not in enabled drivers build config 00:01:58.409 net/nfb: not in enabled drivers build config 00:01:58.409 net/nfp: not in enabled drivers build config 00:01:58.409 net/ngbe: not in enabled drivers build config 00:01:58.409 net/null: not in 
enabled drivers build config 00:01:58.409 net/octeontx: not in enabled drivers build config 00:01:58.409 net/octeon_ep: not in enabled drivers build config 00:01:58.409 net/pcap: not in enabled drivers build config 00:01:58.409 net/pfe: not in enabled drivers build config 00:01:58.409 net/qede: not in enabled drivers build config 00:01:58.409 net/ring: not in enabled drivers build config 00:01:58.409 net/sfc: not in enabled drivers build config 00:01:58.409 net/softnic: not in enabled drivers build config 00:01:58.409 net/tap: not in enabled drivers build config 00:01:58.409 net/thunderx: not in enabled drivers build config 00:01:58.409 net/txgbe: not in enabled drivers build config 00:01:58.409 net/vdev_netvsc: not in enabled drivers build config 00:01:58.409 net/vhost: not in enabled drivers build config 00:01:58.409 net/virtio: not in enabled drivers build config 00:01:58.409 net/vmxnet3: not in enabled drivers build config 00:01:58.409 raw/*: missing internal dependency, "rawdev" 00:01:58.409 crypto/armv8: not in enabled drivers build config 00:01:58.409 crypto/bcmfs: not in enabled drivers build config 00:01:58.409 crypto/caam_jr: not in enabled drivers build config 00:01:58.409 crypto/ccp: not in enabled drivers build config 00:01:58.409 crypto/cnxk: not in enabled drivers build config 00:01:58.409 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.409 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.409 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.409 crypto/mlx5: not in enabled drivers build config 00:01:58.409 crypto/mvsam: not in enabled drivers build config 00:01:58.409 crypto/nitrox: not in enabled drivers build config 00:01:58.409 crypto/null: not in enabled drivers build config 00:01:58.409 crypto/octeontx: not in enabled drivers build config 00:01:58.409 crypto/openssl: not in enabled drivers build config 00:01:58.409 crypto/scheduler: not in enabled drivers build config 00:01:58.409 crypto/uadk: not in enabled drivers build config 00:01:58.409 crypto/virtio: not in enabled drivers build config 00:01:58.409 compress/isal: not in enabled drivers build config 00:01:58.409 compress/mlx5: not in enabled drivers build config 00:01:58.409 compress/nitrox: not in enabled drivers build config 00:01:58.409 compress/octeontx: not in enabled drivers build config 00:01:58.409 compress/zlib: not in enabled drivers build config 00:01:58.409 regex/*: missing internal dependency, "regexdev" 00:01:58.409 ml/*: missing internal dependency, "mldev" 00:01:58.409 vdpa/ifc: not in enabled drivers build config 00:01:58.409 vdpa/mlx5: not in enabled drivers build config 00:01:58.409 vdpa/nfp: not in enabled drivers build config 00:01:58.409 vdpa/sfc: not in enabled drivers build config 00:01:58.409 event/*: missing internal dependency, "eventdev" 00:01:58.409 baseband/*: missing internal dependency, "bbdev" 00:01:58.409 gpu/*: missing internal dependency, "gpudev" 00:01:58.409 00:01:58.409 00:01:58.409 Build targets in project: 84 00:01:58.409 00:01:58.409 DPDK 24.03.0 00:01:58.409 00:01:58.409 User defined options 00:01:58.409 buildtype : debug 00:01:58.409 default_library : shared 00:01:58.409 libdir : lib 00:01:58.409 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:58.409 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:58.409 c_link_args : 00:01:58.409 cpu_instruction_set: native 00:01:58.409 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:58.409 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:58.409 enable_docs : false 00:01:58.409 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:58.409 enable_kmods : false 00:01:58.409 max_lcores : 128 00:01:58.409 tests : false 00:01:58.409 00:01:58.409 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.409 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:58.409 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.409 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.409 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.409 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.409 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.409 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.409 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.409 [8/267] Linking static target lib/librte_kvargs.a 00:01:58.409 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.409 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.409 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.409 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.409 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.409 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.409 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.409 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.409 [17/267] Linking static target lib/librte_log.a 00:01:58.409 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.409 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.409 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.409 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.409 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.409 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.409 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.409 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.409 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.409 [27/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:58.409 [28/267] Linking static target lib/librte_pci.a 00:01:58.409 [29/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.409 [30/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.409 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 
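As an aside, the DPDK "User defined options" summary above corresponds roughly to a meson setup invocation like the sketch below. This is a hedged reconstruction for readability, not the literal command SPDK's configure/dpdkbuild machinery ran; the disable_apps and disable_libs lists are omitted here (their full values appear in the summary above), so this sketch would build more components than the CI run did.

# Hedged reconstruction of the DPDK configuration implied by the summary above;
# SPDK drives this internally, so the real invocation may differ in detail.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
meson setup build-tmp \
  --buildtype=debug --default-library=shared --libdir=lib \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native -Dmax_lcores=128 \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
  # The CI run additionally passed disable_apps/disable_libs with the lists shown above.
ninja -C build-tmp                           # produces the [n/267] compile lines that follow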
00:01:58.409 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.409 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.409 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.409 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.668 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.668 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.668 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.668 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.668 [40/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.668 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.668 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.668 [43/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:58.668 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.668 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.668 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.668 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.669 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.669 [49/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.669 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.669 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.669 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.669 [53/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.669 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.669 [55/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.669 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.669 [57/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.669 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.669 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.669 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.669 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.669 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.669 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.669 [64/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.669 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.669 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.669 [67/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.669 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.669 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.669 [70/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.669 
[71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.669 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.669 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.669 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.669 [75/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.669 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.669 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.669 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.929 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.929 [80/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.929 [81/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.929 [82/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.929 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.929 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.929 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.929 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.929 [87/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.929 [88/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.929 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.929 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.929 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.929 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.929 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.929 [94/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.929 [95/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.929 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.929 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.929 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.929 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.929 [100/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.929 [101/267] Linking static target lib/librte_ring.a 00:01:58.929 [102/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.929 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.929 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.929 [105/267] Linking static target lib/librte_telemetry.a 00:01:58.929 [106/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.929 [107/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.929 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.929 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:58.929 [110/267] Linking static target lib/librte_mempool.a 00:01:58.929 [111/267] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.929 [112/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.929 [113/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.929 [114/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.929 [115/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.929 [116/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.929 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.929 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.929 [119/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.929 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.929 [121/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.929 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.929 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.930 [124/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.930 [125/267] Linking static target lib/librte_meter.a 00:01:58.930 [126/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:58.930 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.930 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.930 [129/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.930 [130/267] Linking static target lib/librte_cmdline.a 00:01:58.930 [131/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.930 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.930 [133/267] Linking static target lib/librte_timer.a 00:01:58.930 [134/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.930 [135/267] Linking static target lib/librte_net.a 00:01:58.930 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.930 [137/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.930 [138/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.930 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.930 [140/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.930 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.930 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.930 [143/267] Linking static target lib/librte_compressdev.a 00:01:58.930 [144/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.930 [145/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.930 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.930 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.930 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.930 [149/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.930 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.930 [151/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.930 [152/267] Compiling 
C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.930 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.930 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.930 [155/267] Linking static target lib/librte_rcu.a 00:01:58.930 [156/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.930 [157/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.930 [158/267] Linking static target lib/librte_dmadev.a 00:01:58.930 [159/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.930 [160/267] Linking static target lib/librte_reorder.a 00:01:58.930 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.930 [162/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.930 [163/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.930 [164/267] Linking static target lib/librte_power.a 00:01:58.930 [165/267] Linking target lib/librte_log.so.24.1 00:01:58.930 [166/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.930 [167/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.930 [168/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.930 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.930 [170/267] Linking static target lib/librte_security.a 00:01:58.930 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.930 [172/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.930 [173/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.930 [174/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.930 [175/267] Linking static target drivers/librte_bus_vdev.a 00:01:58.930 [176/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.930 [177/267] Linking static target lib/librte_eal.a 00:01:58.930 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.930 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.930 [180/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.930 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.930 [182/267] Linking static target lib/librte_mbuf.a 00:01:59.191 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.191 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.191 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.191 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:59.191 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:59.191 [188/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.191 [189/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:59.191 [190/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.191 [191/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.191 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.191 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:59.191 [194/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.191 [195/267] Linking static target drivers/librte_mempool_ring.a 00:01:59.191 [196/267] Linking target lib/librte_kvargs.so.24.1 00:01:59.191 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.191 [198/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.191 [199/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.191 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.191 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.191 [202/267] Linking static target lib/librte_hash.a 00:01:59.191 [203/267] Linking static target drivers/librte_bus_pci.a 00:01:59.452 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.452 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:59.452 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.452 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.452 [208/267] Linking static target lib/librte_cryptodev.a 00:01:59.452 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.452 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.452 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.452 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:59.452 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.713 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:59.713 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.713 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.713 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.713 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.713 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.713 [220/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.714 [221/267] Linking static target lib/librte_ethdev.a 00:01:59.975 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.975 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.975 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.235 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.235 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.528 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.528 [228/267] Linking static target lib/librte_vhost.a 00:02:01.473 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.861 [230/267] 
Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.450 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.836 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.837 [233/267] Linking target lib/librte_eal.so.24.1 00:02:10.837 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:11.098 [235/267] Linking target lib/librte_pci.so.24.1 00:02:11.098 [236/267] Linking target lib/librte_ring.so.24.1 00:02:11.098 [237/267] Linking target lib/librte_timer.so.24.1 00:02:11.098 [238/267] Linking target lib/librte_meter.so.24.1 00:02:11.098 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:11.098 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:11.098 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:11.098 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:11.098 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:11.098 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:11.098 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:11.098 [246/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:11.098 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:11.098 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:11.098 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:11.358 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:11.358 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:11.358 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:11.358 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:11.358 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:11.358 [255/267] Linking target lib/librte_net.so.24.1 00:02:11.358 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:11.358 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:11.620 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:11.620 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:11.620 [260/267] Linking target lib/librte_hash.so.24.1 00:02:11.620 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:11.620 [262/267] Linking target lib/librte_security.so.24.1 00:02:11.620 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:11.882 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:11.882 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:11.882 [266/267] Linking target lib/librte_power.so.24.1 00:02:11.882 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:11.882 INFO: autodetecting backend as ninja 00:02:11.882 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:13.271 CC lib/log/log.o 00:02:13.271 CC lib/log/log_flags.o 00:02:13.271 CC lib/log/log_deprecated.o 00:02:13.271 CC lib/ut/ut.o 00:02:13.271 CC lib/ut_mock/mock.o 00:02:13.271 LIB libspdk_log.a 00:02:13.271 LIB libspdk_ut_mock.a 00:02:13.271 LIB libspdk_ut.a 00:02:13.271 SO libspdk_log.so.7.0 
00:02:13.271 SO libspdk_ut_mock.so.6.0 00:02:13.271 SO libspdk_ut.so.2.0 00:02:13.271 SYMLINK libspdk_ut_mock.so 00:02:13.271 SYMLINK libspdk_log.so 00:02:13.271 SYMLINK libspdk_ut.so 00:02:13.532 CXX lib/trace_parser/trace.o 00:02:13.532 CC lib/dma/dma.o 00:02:13.532 CC lib/ioat/ioat.o 00:02:13.532 CC lib/util/base64.o 00:02:13.532 CC lib/util/bit_array.o 00:02:13.532 CC lib/util/cpuset.o 00:02:13.532 CC lib/util/crc16.o 00:02:13.532 CC lib/util/crc32.o 00:02:13.532 CC lib/util/crc32c.o 00:02:13.532 CC lib/util/crc32_ieee.o 00:02:13.532 CC lib/util/fd.o 00:02:13.532 CC lib/util/crc64.o 00:02:13.532 CC lib/util/dif.o 00:02:13.532 CC lib/util/file.o 00:02:13.532 CC lib/util/hexlify.o 00:02:13.532 CC lib/util/iov.o 00:02:13.792 CC lib/util/math.o 00:02:13.792 CC lib/util/pipe.o 00:02:13.792 CC lib/util/strerror_tls.o 00:02:13.792 CC lib/util/string.o 00:02:13.792 CC lib/util/uuid.o 00:02:13.792 CC lib/util/fd_group.o 00:02:13.792 CC lib/util/xor.o 00:02:13.793 CC lib/util/zipf.o 00:02:13.793 CC lib/vfio_user/host/vfio_user_pci.o 00:02:13.793 CC lib/vfio_user/host/vfio_user.o 00:02:13.793 LIB libspdk_dma.a 00:02:13.793 SO libspdk_dma.so.4.0 00:02:14.053 SYMLINK libspdk_dma.so 00:02:14.053 LIB libspdk_ioat.a 00:02:14.053 SO libspdk_ioat.so.7.0 00:02:14.053 SYMLINK libspdk_ioat.so 00:02:14.053 LIB libspdk_vfio_user.a 00:02:14.053 SO libspdk_vfio_user.so.5.0 00:02:14.053 LIB libspdk_util.a 00:02:14.053 SYMLINK libspdk_vfio_user.so 00:02:14.314 SO libspdk_util.so.9.1 00:02:14.314 SYMLINK libspdk_util.so 00:02:14.314 LIB libspdk_trace_parser.a 00:02:14.575 SO libspdk_trace_parser.so.5.0 00:02:14.575 SYMLINK libspdk_trace_parser.so 00:02:14.575 CC lib/json/json_parse.o 00:02:14.575 CC lib/json/json_util.o 00:02:14.575 CC lib/json/json_write.o 00:02:14.575 CC lib/vmd/vmd.o 00:02:14.575 CC lib/rdma_utils/rdma_utils.o 00:02:14.575 CC lib/vmd/led.o 00:02:14.575 CC lib/rdma_provider/common.o 00:02:14.575 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:14.575 CC lib/env_dpdk/env.o 00:02:14.575 CC lib/env_dpdk/memory.o 00:02:14.575 CC lib/idxd/idxd.o 00:02:14.575 CC lib/idxd/idxd_kernel.o 00:02:14.575 CC lib/env_dpdk/pci.o 00:02:14.575 CC lib/idxd/idxd_user.o 00:02:14.575 CC lib/env_dpdk/init.o 00:02:14.575 CC lib/env_dpdk/threads.o 00:02:14.575 CC lib/conf/conf.o 00:02:14.575 CC lib/env_dpdk/pci_ioat.o 00:02:14.575 CC lib/env_dpdk/pci_virtio.o 00:02:14.575 CC lib/env_dpdk/pci_vmd.o 00:02:14.575 CC lib/env_dpdk/pci_idxd.o 00:02:14.575 CC lib/env_dpdk/pci_event.o 00:02:14.575 CC lib/env_dpdk/sigbus_handler.o 00:02:14.575 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:14.575 CC lib/env_dpdk/pci_dpdk.o 00:02:14.575 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:14.836 LIB libspdk_rdma_provider.a 00:02:14.836 LIB libspdk_conf.a 00:02:14.836 SO libspdk_rdma_provider.so.6.0 00:02:15.097 LIB libspdk_rdma_utils.a 00:02:15.097 LIB libspdk_json.a 00:02:15.097 SO libspdk_conf.so.6.0 00:02:15.097 SO libspdk_rdma_utils.so.1.0 00:02:15.097 SYMLINK libspdk_rdma_provider.so 00:02:15.097 SO libspdk_json.so.6.0 00:02:15.097 SYMLINK libspdk_conf.so 00:02:15.097 SYMLINK libspdk_rdma_utils.so 00:02:15.098 SYMLINK libspdk_json.so 00:02:15.098 LIB libspdk_idxd.a 00:02:15.359 SO libspdk_idxd.so.12.0 00:02:15.359 LIB libspdk_vmd.a 00:02:15.359 SO libspdk_vmd.so.6.0 00:02:15.359 SYMLINK libspdk_idxd.so 00:02:15.359 SYMLINK libspdk_vmd.so 00:02:15.359 CC lib/jsonrpc/jsonrpc_server.o 00:02:15.359 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:15.359 CC lib/jsonrpc/jsonrpc_client.o 00:02:15.359 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:15.620 LIB 
libspdk_jsonrpc.a 00:02:15.620 SO libspdk_jsonrpc.so.6.0 00:02:15.881 SYMLINK libspdk_jsonrpc.so 00:02:15.881 LIB libspdk_env_dpdk.a 00:02:15.881 SO libspdk_env_dpdk.so.14.1 00:02:16.141 SYMLINK libspdk_env_dpdk.so 00:02:16.141 CC lib/rpc/rpc.o 00:02:16.400 LIB libspdk_rpc.a 00:02:16.400 SO libspdk_rpc.so.6.0 00:02:16.400 SYMLINK libspdk_rpc.so 00:02:16.972 CC lib/keyring/keyring.o 00:02:16.972 CC lib/keyring/keyring_rpc.o 00:02:16.972 CC lib/notify/notify.o 00:02:16.972 CC lib/notify/notify_rpc.o 00:02:16.972 CC lib/trace/trace.o 00:02:16.972 CC lib/trace/trace_flags.o 00:02:16.972 CC lib/trace/trace_rpc.o 00:02:16.972 LIB libspdk_notify.a 00:02:16.972 SO libspdk_notify.so.6.0 00:02:16.972 LIB libspdk_keyring.a 00:02:16.972 LIB libspdk_trace.a 00:02:16.972 SO libspdk_keyring.so.1.0 00:02:17.234 SYMLINK libspdk_notify.so 00:02:17.234 SO libspdk_trace.so.10.0 00:02:17.234 SYMLINK libspdk_keyring.so 00:02:17.234 SYMLINK libspdk_trace.so 00:02:17.495 CC lib/sock/sock.o 00:02:17.495 CC lib/sock/sock_rpc.o 00:02:17.495 CC lib/thread/thread.o 00:02:17.495 CC lib/thread/iobuf.o 00:02:18.068 LIB libspdk_sock.a 00:02:18.068 SO libspdk_sock.so.10.0 00:02:18.068 SYMLINK libspdk_sock.so 00:02:18.330 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.330 CC lib/nvme/nvme_ctrlr.o 00:02:18.330 CC lib/nvme/nvme_fabric.o 00:02:18.330 CC lib/nvme/nvme_ns_cmd.o 00:02:18.330 CC lib/nvme/nvme_ns.o 00:02:18.330 CC lib/nvme/nvme_pcie_common.o 00:02:18.330 CC lib/nvme/nvme_pcie.o 00:02:18.330 CC lib/nvme/nvme_qpair.o 00:02:18.330 CC lib/nvme/nvme.o 00:02:18.330 CC lib/nvme/nvme_discovery.o 00:02:18.330 CC lib/nvme/nvme_quirks.o 00:02:18.330 CC lib/nvme/nvme_transport.o 00:02:18.330 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:18.330 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:18.330 CC lib/nvme/nvme_tcp.o 00:02:18.330 CC lib/nvme/nvme_opal.o 00:02:18.330 CC lib/nvme/nvme_io_msg.o 00:02:18.330 CC lib/nvme/nvme_poll_group.o 00:02:18.330 CC lib/nvme/nvme_zns.o 00:02:18.330 CC lib/nvme/nvme_stubs.o 00:02:18.330 CC lib/nvme/nvme_auth.o 00:02:18.330 CC lib/nvme/nvme_cuse.o 00:02:18.330 CC lib/nvme/nvme_vfio_user.o 00:02:18.330 CC lib/nvme/nvme_rdma.o 00:02:18.928 LIB libspdk_thread.a 00:02:18.928 SO libspdk_thread.so.10.1 00:02:18.928 SYMLINK libspdk_thread.so 00:02:19.190 CC lib/virtio/virtio_vfio_user.o 00:02:19.190 CC lib/virtio/virtio.o 00:02:19.190 CC lib/virtio/virtio_vhost_user.o 00:02:19.190 CC lib/virtio/virtio_pci.o 00:02:19.190 CC lib/accel/accel.o 00:02:19.190 CC lib/accel/accel_rpc.o 00:02:19.190 CC lib/accel/accel_sw.o 00:02:19.190 CC lib/vfu_tgt/tgt_endpoint.o 00:02:19.190 CC lib/vfu_tgt/tgt_rpc.o 00:02:19.190 CC lib/blob/blobstore.o 00:02:19.190 CC lib/blob/request.o 00:02:19.190 CC lib/init/json_config.o 00:02:19.190 CC lib/blob/zeroes.o 00:02:19.190 CC lib/init/subsystem.o 00:02:19.190 CC lib/init/rpc.o 00:02:19.190 CC lib/blob/blob_bs_dev.o 00:02:19.190 CC lib/init/subsystem_rpc.o 00:02:19.451 LIB libspdk_init.a 00:02:19.451 SO libspdk_init.so.5.0 00:02:19.451 LIB libspdk_virtio.a 00:02:19.713 LIB libspdk_vfu_tgt.a 00:02:19.713 SO libspdk_virtio.so.7.0 00:02:19.713 SYMLINK libspdk_init.so 00:02:19.713 SO libspdk_vfu_tgt.so.3.0 00:02:19.713 SYMLINK libspdk_virtio.so 00:02:19.713 SYMLINK libspdk_vfu_tgt.so 00:02:19.975 CC lib/event/app.o 00:02:19.975 CC lib/event/reactor.o 00:02:19.975 CC lib/event/log_rpc.o 00:02:19.975 CC lib/event/app_rpc.o 00:02:19.975 CC lib/event/scheduler_static.o 00:02:19.975 LIB libspdk_accel.a 00:02:20.236 SO libspdk_accel.so.15.1 00:02:20.236 LIB libspdk_nvme.a 00:02:20.236 SYMLINK 
libspdk_accel.so 00:02:20.236 SO libspdk_nvme.so.13.1 00:02:20.236 LIB libspdk_event.a 00:02:20.236 SO libspdk_event.so.14.0 00:02:20.497 SYMLINK libspdk_event.so 00:02:20.497 SYMLINK libspdk_nvme.so 00:02:20.497 CC lib/bdev/bdev.o 00:02:20.497 CC lib/bdev/bdev_rpc.o 00:02:20.497 CC lib/bdev/bdev_zone.o 00:02:20.497 CC lib/bdev/part.o 00:02:20.497 CC lib/bdev/scsi_nvme.o 00:02:21.886 LIB libspdk_blob.a 00:02:21.886 SO libspdk_blob.so.11.0 00:02:21.886 SYMLINK libspdk_blob.so 00:02:22.148 CC lib/lvol/lvol.o 00:02:22.148 CC lib/blobfs/blobfs.o 00:02:22.148 CC lib/blobfs/tree.o 00:02:22.725 LIB libspdk_bdev.a 00:02:22.725 SO libspdk_bdev.so.15.1 00:02:22.987 SYMLINK libspdk_bdev.so 00:02:22.987 LIB libspdk_blobfs.a 00:02:22.987 SO libspdk_blobfs.so.10.0 00:02:22.987 LIB libspdk_lvol.a 00:02:22.987 SO libspdk_lvol.so.10.0 00:02:22.988 SYMLINK libspdk_blobfs.so 00:02:23.247 SYMLINK libspdk_lvol.so 00:02:23.247 CC lib/ftl/ftl_core.o 00:02:23.247 CC lib/ftl/ftl_init.o 00:02:23.247 CC lib/ftl/ftl_layout.o 00:02:23.247 CC lib/ftl/ftl_debug.o 00:02:23.247 CC lib/ftl/ftl_sb.o 00:02:23.247 CC lib/ftl/ftl_io.o 00:02:23.247 CC lib/ftl/ftl_l2p.o 00:02:23.247 CC lib/ftl/ftl_l2p_flat.o 00:02:23.247 CC lib/ftl/ftl_nv_cache.o 00:02:23.247 CC lib/ftl/ftl_band.o 00:02:23.247 CC lib/scsi/dev.o 00:02:23.247 CC lib/ftl/ftl_band_ops.o 00:02:23.247 CC lib/ftl/ftl_writer.o 00:02:23.247 CC lib/scsi/lun.o 00:02:23.247 CC lib/scsi/port.o 00:02:23.247 CC lib/ftl/ftl_rq.o 00:02:23.247 CC lib/ftl/ftl_reloc.o 00:02:23.247 CC lib/scsi/scsi.o 00:02:23.247 CC lib/nvmf/ctrlr.o 00:02:23.247 CC lib/ftl/ftl_l2p_cache.o 00:02:23.247 CC lib/scsi/scsi_bdev.o 00:02:23.247 CC lib/nvmf/ctrlr_discovery.o 00:02:23.247 CC lib/ftl/ftl_p2l.o 00:02:23.247 CC lib/scsi/scsi_pr.o 00:02:23.247 CC lib/nvmf/ctrlr_bdev.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.247 CC lib/scsi/scsi_rpc.o 00:02:23.247 CC lib/ublk/ublk.o 00:02:23.247 CC lib/nvmf/subsystem.o 00:02:23.247 CC lib/ublk/ublk_rpc.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.247 CC lib/scsi/task.o 00:02:23.247 CC lib/nvmf/nvmf.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.247 CC lib/nvmf/nvmf_rpc.o 00:02:23.247 CC lib/nvmf/transport.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.247 CC lib/nvmf/tcp.o 00:02:23.247 CC lib/nbd/nbd.o 00:02:23.247 CC lib/nvmf/stubs.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.247 CC lib/nvmf/mdns_server.o 00:02:23.247 CC lib/nbd/nbd_rpc.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.247 CC lib/nvmf/vfio_user.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.247 CC lib/nvmf/rdma.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.247 CC lib/nvmf/auth.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.247 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.247 CC lib/ftl/utils/ftl_conf.o 00:02:23.247 CC lib/ftl/utils/ftl_md.o 00:02:23.247 CC lib/ftl/utils/ftl_mempool.o 00:02:23.247 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.247 CC lib/ftl/utils/ftl_property.o 00:02:23.247 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.247 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.247 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.247 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.247 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.247 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.247 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:23.247 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:02:23.247 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.247 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.247 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.247 CC lib/ftl/base/ftl_base_dev.o 00:02:23.247 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.247 CC lib/ftl/ftl_trace.o 00:02:23.816 LIB libspdk_nbd.a 00:02:23.816 LIB libspdk_scsi.a 00:02:23.816 SO libspdk_nbd.so.7.0 00:02:23.816 SO libspdk_scsi.so.9.0 00:02:23.816 SYMLINK libspdk_nbd.so 00:02:23.816 LIB libspdk_ublk.a 00:02:24.077 SO libspdk_ublk.so.3.0 00:02:24.077 SYMLINK libspdk_scsi.so 00:02:24.077 SYMLINK libspdk_ublk.so 00:02:24.337 LIB libspdk_ftl.a 00:02:24.337 CC lib/iscsi/conn.o 00:02:24.337 CC lib/iscsi/init_grp.o 00:02:24.337 CC lib/iscsi/iscsi.o 00:02:24.337 CC lib/iscsi/md5.o 00:02:24.337 CC lib/iscsi/param.o 00:02:24.337 CC lib/iscsi/portal_grp.o 00:02:24.337 CC lib/iscsi/tgt_node.o 00:02:24.337 CC lib/iscsi/iscsi_subsystem.o 00:02:24.337 CC lib/iscsi/iscsi_rpc.o 00:02:24.337 CC lib/iscsi/task.o 00:02:24.337 CC lib/vhost/vhost.o 00:02:24.337 CC lib/vhost/vhost_rpc.o 00:02:24.337 CC lib/vhost/vhost_scsi.o 00:02:24.337 CC lib/vhost/vhost_blk.o 00:02:24.337 CC lib/vhost/rte_vhost_user.o 00:02:24.337 SO libspdk_ftl.so.9.0 00:02:24.597 SYMLINK libspdk_ftl.so 00:02:25.169 LIB libspdk_nvmf.a 00:02:25.169 SO libspdk_nvmf.so.19.0 00:02:25.169 LIB libspdk_vhost.a 00:02:25.169 SO libspdk_vhost.so.8.0 00:02:25.430 SYMLINK libspdk_nvmf.so 00:02:25.430 SYMLINK libspdk_vhost.so 00:02:25.430 LIB libspdk_iscsi.a 00:02:25.430 SO libspdk_iscsi.so.8.0 00:02:25.691 SYMLINK libspdk_iscsi.so 00:02:26.279 CC module/vfu_device/vfu_virtio_blk.o 00:02:26.279 CC module/vfu_device/vfu_virtio.o 00:02:26.279 CC module/vfu_device/vfu_virtio_scsi.o 00:02:26.279 CC module/vfu_device/vfu_virtio_rpc.o 00:02:26.279 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.279 CC module/sock/posix/posix.o 00:02:26.279 LIB libspdk_env_dpdk_rpc.a 00:02:26.279 CC module/keyring/file/keyring.o 00:02:26.539 CC module/keyring/file/keyring_rpc.o 00:02:26.539 CC module/keyring/linux/keyring.o 00:02:26.539 CC module/blob/bdev/blob_bdev.o 00:02:26.539 CC module/keyring/linux/keyring_rpc.o 00:02:26.539 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.539 CC module/accel/dsa/accel_dsa.o 00:02:26.539 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.539 CC module/accel/ioat/accel_ioat.o 00:02:26.539 CC module/accel/error/accel_error.o 00:02:26.539 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.539 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.539 CC module/accel/error/accel_error_rpc.o 00:02:26.539 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.539 SO libspdk_env_dpdk_rpc.so.6.0 00:02:26.539 CC module/accel/iaa/accel_iaa.o 00:02:26.539 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.539 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.539 LIB libspdk_scheduler_gscheduler.a 00:02:26.539 LIB libspdk_keyring_linux.a 00:02:26.539 LIB libspdk_keyring_file.a 00:02:26.539 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.539 SO libspdk_scheduler_gscheduler.so.4.0 00:02:26.539 SO libspdk_keyring_file.so.1.0 00:02:26.539 SO libspdk_keyring_linux.so.1.0 00:02:26.539 LIB libspdk_accel_ioat.a 00:02:26.539 LIB libspdk_accel_error.a 00:02:26.539 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:26.539 LIB libspdk_scheduler_dynamic.a 00:02:26.799 LIB libspdk_accel_iaa.a 00:02:26.799 SO libspdk_accel_error.so.2.0 00:02:26.799 LIB libspdk_accel_dsa.a 00:02:26.799 SO libspdk_accel_ioat.so.6.0 00:02:26.799 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.799 SYMLINK libspdk_keyring_file.so 00:02:26.799 SO 
libspdk_scheduler_dynamic.so.4.0 00:02:26.799 LIB libspdk_blob_bdev.a 00:02:26.799 SYMLINK libspdk_keyring_linux.so 00:02:26.799 SO libspdk_accel_iaa.so.3.0 00:02:26.799 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.799 SO libspdk_accel_dsa.so.5.0 00:02:26.799 SO libspdk_blob_bdev.so.11.0 00:02:26.799 SYMLINK libspdk_accel_ioat.so 00:02:26.799 SYMLINK libspdk_accel_error.so 00:02:26.799 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.799 LIB libspdk_vfu_device.a 00:02:26.799 SYMLINK libspdk_accel_iaa.so 00:02:26.799 SYMLINK libspdk_accel_dsa.so 00:02:26.799 SYMLINK libspdk_blob_bdev.so 00:02:26.799 SO libspdk_vfu_device.so.3.0 00:02:27.059 SYMLINK libspdk_vfu_device.so 00:02:27.059 LIB libspdk_sock_posix.a 00:02:27.319 SO libspdk_sock_posix.so.6.0 00:02:27.319 SYMLINK libspdk_sock_posix.so 00:02:27.319 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.319 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.319 CC module/bdev/delay/vbdev_delay.o 00:02:27.319 CC module/bdev/error/vbdev_error.o 00:02:27.319 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.319 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.319 CC module/bdev/gpt/gpt.o 00:02:27.319 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.319 CC module/bdev/raid/bdev_raid.o 00:02:27.319 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.319 CC module/bdev/malloc/bdev_malloc.o 00:02:27.319 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.319 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.319 CC module/bdev/raid/raid0.o 00:02:27.319 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.319 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.319 CC module/bdev/raid/raid1.o 00:02:27.319 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.319 CC module/bdev/raid/concat.o 00:02:27.319 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.319 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.319 CC module/bdev/split/vbdev_split.o 00:02:27.319 CC module/bdev/aio/bdev_aio.o 00:02:27.319 CC module/bdev/ftl/bdev_ftl.o 00:02:27.319 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.319 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.319 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.319 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.319 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.319 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.319 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.319 CC module/bdev/null/bdev_null.o 00:02:27.319 CC module/bdev/nvme/bdev_nvme.o 00:02:27.319 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.319 CC module/bdev/null/bdev_null_rpc.o 00:02:27.319 CC module/bdev/nvme/nvme_rpc.o 00:02:27.319 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.319 CC module/bdev/nvme/vbdev_opal.o 00:02:27.319 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.319 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.319 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.319 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.577 LIB libspdk_blobfs_bdev.a 00:02:27.891 SO libspdk_blobfs_bdev.so.6.0 00:02:27.891 LIB libspdk_bdev_split.a 00:02:27.891 LIB libspdk_bdev_error.a 00:02:27.891 LIB libspdk_bdev_gpt.a 00:02:27.891 LIB libspdk_bdev_null.a 00:02:27.891 SO libspdk_bdev_error.so.6.0 00:02:27.891 SO libspdk_bdev_gpt.so.6.0 00:02:27.891 SO libspdk_bdev_split.so.6.0 00:02:27.891 SYMLINK libspdk_blobfs_bdev.so 00:02:27.891 LIB libspdk_bdev_ftl.a 00:02:27.891 LIB libspdk_bdev_zone_block.a 00:02:27.891 LIB libspdk_bdev_delay.a 00:02:27.891 LIB libspdk_bdev_passthru.a 00:02:27.891 SO libspdk_bdev_null.so.6.0 00:02:27.891 LIB libspdk_bdev_aio.a 00:02:27.891 SO libspdk_bdev_ftl.so.6.0 00:02:27.891 SYMLINK 
libspdk_bdev_error.so 00:02:27.891 LIB libspdk_bdev_iscsi.a 00:02:27.891 SYMLINK libspdk_bdev_gpt.so 00:02:27.891 SO libspdk_bdev_zone_block.so.6.0 00:02:27.891 SO libspdk_bdev_passthru.so.6.0 00:02:27.891 SYMLINK libspdk_bdev_split.so 00:02:27.891 SO libspdk_bdev_delay.so.6.0 00:02:27.891 LIB libspdk_bdev_malloc.a 00:02:27.891 SO libspdk_bdev_aio.so.6.0 00:02:27.891 SYMLINK libspdk_bdev_null.so 00:02:27.891 SO libspdk_bdev_iscsi.so.6.0 00:02:27.891 SO libspdk_bdev_malloc.so.6.0 00:02:27.891 SYMLINK libspdk_bdev_ftl.so 00:02:27.891 LIB libspdk_bdev_lvol.a 00:02:27.891 SYMLINK libspdk_bdev_passthru.so 00:02:27.891 SYMLINK libspdk_bdev_delay.so 00:02:27.891 SYMLINK libspdk_bdev_zone_block.so 00:02:27.891 SYMLINK libspdk_bdev_aio.so 00:02:27.891 SYMLINK libspdk_bdev_iscsi.so 00:02:27.891 SO libspdk_bdev_lvol.so.6.0 00:02:27.891 SYMLINK libspdk_bdev_malloc.so 00:02:27.891 LIB libspdk_bdev_virtio.a 00:02:28.151 SYMLINK libspdk_bdev_lvol.so 00:02:28.151 SO libspdk_bdev_virtio.so.6.0 00:02:28.151 SYMLINK libspdk_bdev_virtio.so 00:02:28.411 LIB libspdk_bdev_raid.a 00:02:28.411 SO libspdk_bdev_raid.so.6.0 00:02:28.411 SYMLINK libspdk_bdev_raid.so 00:02:29.351 LIB libspdk_bdev_nvme.a 00:02:29.351 SO libspdk_bdev_nvme.so.7.0 00:02:29.611 SYMLINK libspdk_bdev_nvme.so 00:02:30.182 CC module/event/subsystems/iobuf/iobuf.o 00:02:30.183 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:30.183 CC module/event/subsystems/sock/sock.o 00:02:30.183 CC module/event/subsystems/keyring/keyring.o 00:02:30.183 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:30.183 CC module/event/subsystems/vmd/vmd.o 00:02:30.183 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:30.183 CC module/event/subsystems/scheduler/scheduler.o 00:02:30.183 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:30.444 LIB libspdk_event_scheduler.a 00:02:30.444 LIB libspdk_event_keyring.a 00:02:30.444 LIB libspdk_event_iobuf.a 00:02:30.444 LIB libspdk_event_vhost_blk.a 00:02:30.444 LIB libspdk_event_sock.a 00:02:30.444 LIB libspdk_event_vfu_tgt.a 00:02:30.444 LIB libspdk_event_vmd.a 00:02:30.444 SO libspdk_event_scheduler.so.4.0 00:02:30.444 SO libspdk_event_keyring.so.1.0 00:02:30.444 SO libspdk_event_iobuf.so.3.0 00:02:30.444 SO libspdk_event_vhost_blk.so.3.0 00:02:30.444 SO libspdk_event_sock.so.5.0 00:02:30.444 SO libspdk_event_vfu_tgt.so.3.0 00:02:30.444 SO libspdk_event_vmd.so.6.0 00:02:30.444 SYMLINK libspdk_event_keyring.so 00:02:30.444 SYMLINK libspdk_event_scheduler.so 00:02:30.444 SYMLINK libspdk_event_vhost_blk.so 00:02:30.444 SYMLINK libspdk_event_iobuf.so 00:02:30.444 SYMLINK libspdk_event_sock.so 00:02:30.444 SYMLINK libspdk_event_vfu_tgt.so 00:02:30.444 SYMLINK libspdk_event_vmd.so 00:02:31.017 CC module/event/subsystems/accel/accel.o 00:02:31.017 LIB libspdk_event_accel.a 00:02:31.017 SO libspdk_event_accel.so.6.0 00:02:31.017 SYMLINK libspdk_event_accel.so 00:02:31.614 CC module/event/subsystems/bdev/bdev.o 00:02:31.614 LIB libspdk_event_bdev.a 00:02:31.614 SO libspdk_event_bdev.so.6.0 00:02:31.876 SYMLINK libspdk_event_bdev.so 00:02:32.137 CC module/event/subsystems/nbd/nbd.o 00:02:32.137 CC module/event/subsystems/ublk/ublk.o 00:02:32.137 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:32.137 CC module/event/subsystems/scsi/scsi.o 00:02:32.137 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:32.137 LIB libspdk_event_ublk.a 00:02:32.398 LIB libspdk_event_nbd.a 00:02:32.398 SO libspdk_event_ublk.so.3.0 00:02:32.398 LIB libspdk_event_scsi.a 00:02:32.398 SO libspdk_event_nbd.so.6.0 00:02:32.398 SO libspdk_event_scsi.so.6.0 
00:02:32.398 SYMLINK libspdk_event_ublk.so 00:02:32.398 LIB libspdk_event_nvmf.a 00:02:32.398 SYMLINK libspdk_event_nbd.so 00:02:32.398 SO libspdk_event_nvmf.so.6.0 00:02:32.398 SYMLINK libspdk_event_scsi.so 00:02:32.398 SYMLINK libspdk_event_nvmf.so 00:02:32.660 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.660 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.922 LIB libspdk_event_vhost_scsi.a 00:02:32.922 LIB libspdk_event_iscsi.a 00:02:32.922 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.922 SO libspdk_event_iscsi.so.6.0 00:02:32.922 SYMLINK libspdk_event_vhost_scsi.so 00:02:33.183 SYMLINK libspdk_event_iscsi.so 00:02:33.183 SO libspdk.so.6.0 00:02:33.183 SYMLINK libspdk.so 00:02:33.759 CC test/rpc_client/rpc_client_test.o 00:02:33.759 TEST_HEADER include/spdk/accel.h 00:02:33.759 TEST_HEADER include/spdk/accel_module.h 00:02:33.759 TEST_HEADER include/spdk/assert.h 00:02:33.759 TEST_HEADER include/spdk/barrier.h 00:02:33.759 TEST_HEADER include/spdk/base64.h 00:02:33.759 CC app/trace_record/trace_record.o 00:02:33.759 CXX app/trace/trace.o 00:02:33.759 TEST_HEADER include/spdk/bdev.h 00:02:33.759 TEST_HEADER include/spdk/bdev_module.h 00:02:33.759 TEST_HEADER include/spdk/bdev_zone.h 00:02:33.759 TEST_HEADER include/spdk/bit_array.h 00:02:33.759 CC app/spdk_top/spdk_top.o 00:02:33.759 CC app/spdk_nvme_perf/perf.o 00:02:33.759 TEST_HEADER include/spdk/bit_pool.h 00:02:33.759 TEST_HEADER include/spdk/blob_bdev.h 00:02:33.759 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:33.759 CC app/spdk_lspci/spdk_lspci.o 00:02:33.759 TEST_HEADER include/spdk/blobfs.h 00:02:33.759 CC app/spdk_nvme_discover/discovery_aer.o 00:02:33.759 TEST_HEADER include/spdk/blob.h 00:02:33.759 TEST_HEADER include/spdk/conf.h 00:02:33.759 TEST_HEADER include/spdk/config.h 00:02:33.759 TEST_HEADER include/spdk/cpuset.h 00:02:33.759 TEST_HEADER include/spdk/crc16.h 00:02:33.759 TEST_HEADER include/spdk/crc32.h 00:02:33.759 TEST_HEADER include/spdk/crc64.h 00:02:33.759 CC app/spdk_nvme_identify/identify.o 00:02:33.759 TEST_HEADER include/spdk/dif.h 00:02:33.759 TEST_HEADER include/spdk/dma.h 00:02:33.759 TEST_HEADER include/spdk/endian.h 00:02:33.759 TEST_HEADER include/spdk/env_dpdk.h 00:02:33.759 TEST_HEADER include/spdk/env.h 00:02:33.759 TEST_HEADER include/spdk/fd_group.h 00:02:33.759 TEST_HEADER include/spdk/event.h 00:02:33.759 TEST_HEADER include/spdk/fd.h 00:02:33.759 TEST_HEADER include/spdk/file.h 00:02:33.759 TEST_HEADER include/spdk/gpt_spec.h 00:02:33.759 TEST_HEADER include/spdk/ftl.h 00:02:33.759 TEST_HEADER include/spdk/hexlify.h 00:02:33.759 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:33.759 TEST_HEADER include/spdk/idxd.h 00:02:33.759 TEST_HEADER include/spdk/histogram_data.h 00:02:33.759 TEST_HEADER include/spdk/idxd_spec.h 00:02:33.759 TEST_HEADER include/spdk/init.h 00:02:33.759 TEST_HEADER include/spdk/ioat.h 00:02:33.759 TEST_HEADER include/spdk/ioat_spec.h 00:02:33.759 TEST_HEADER include/spdk/iscsi_spec.h 00:02:33.759 TEST_HEADER include/spdk/json.h 00:02:33.759 TEST_HEADER include/spdk/jsonrpc.h 00:02:33.759 TEST_HEADER include/spdk/keyring.h 00:02:33.759 TEST_HEADER include/spdk/keyring_module.h 00:02:33.759 CC app/iscsi_tgt/iscsi_tgt.o 00:02:33.759 CC app/nvmf_tgt/nvmf_main.o 00:02:33.759 TEST_HEADER include/spdk/likely.h 00:02:33.759 TEST_HEADER include/spdk/log.h 00:02:33.759 CC app/spdk_dd/spdk_dd.o 00:02:33.759 TEST_HEADER include/spdk/lvol.h 00:02:33.759 TEST_HEADER include/spdk/mmio.h 00:02:33.759 TEST_HEADER include/spdk/memory.h 00:02:33.759 TEST_HEADER 
include/spdk/nbd.h 00:02:33.759 TEST_HEADER include/spdk/notify.h 00:02:33.759 TEST_HEADER include/spdk/nvme.h 00:02:33.759 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:33.759 TEST_HEADER include/spdk/nvme_intel.h 00:02:33.759 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:33.759 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:33.759 TEST_HEADER include/spdk/nvme_spec.h 00:02:33.759 TEST_HEADER include/spdk/nvme_zns.h 00:02:33.759 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:33.759 TEST_HEADER include/spdk/nvmf.h 00:02:33.759 TEST_HEADER include/spdk/nvmf_spec.h 00:02:33.759 TEST_HEADER include/spdk/nvmf_transport.h 00:02:33.759 TEST_HEADER include/spdk/opal.h 00:02:33.759 TEST_HEADER include/spdk/opal_spec.h 00:02:33.759 CC app/spdk_tgt/spdk_tgt.o 00:02:33.759 TEST_HEADER include/spdk/pci_ids.h 00:02:33.759 TEST_HEADER include/spdk/pipe.h 00:02:33.759 TEST_HEADER include/spdk/queue.h 00:02:33.759 TEST_HEADER include/spdk/scheduler.h 00:02:33.759 TEST_HEADER include/spdk/reduce.h 00:02:33.759 TEST_HEADER include/spdk/rpc.h 00:02:33.759 TEST_HEADER include/spdk/sock.h 00:02:33.759 TEST_HEADER include/spdk/scsi.h 00:02:33.759 TEST_HEADER include/spdk/scsi_spec.h 00:02:33.759 TEST_HEADER include/spdk/stdinc.h 00:02:33.759 TEST_HEADER include/spdk/string.h 00:02:33.759 TEST_HEADER include/spdk/thread.h 00:02:33.759 TEST_HEADER include/spdk/tree.h 00:02:33.759 TEST_HEADER include/spdk/trace_parser.h 00:02:33.759 TEST_HEADER include/spdk/trace.h 00:02:33.759 TEST_HEADER include/spdk/ublk.h 00:02:33.759 TEST_HEADER include/spdk/uuid.h 00:02:33.759 TEST_HEADER include/spdk/version.h 00:02:33.759 TEST_HEADER include/spdk/util.h 00:02:33.759 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:33.759 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:33.759 TEST_HEADER include/spdk/vhost.h 00:02:33.759 TEST_HEADER include/spdk/vmd.h 00:02:33.759 TEST_HEADER include/spdk/zipf.h 00:02:33.759 TEST_HEADER include/spdk/xor.h 00:02:33.759 CXX test/cpp_headers/accel.o 00:02:33.759 CXX test/cpp_headers/accel_module.o 00:02:33.759 CXX test/cpp_headers/assert.o 00:02:33.759 CXX test/cpp_headers/barrier.o 00:02:33.759 CXX test/cpp_headers/base64.o 00:02:33.759 CXX test/cpp_headers/bdev.o 00:02:33.759 CXX test/cpp_headers/bdev_module.o 00:02:33.759 CXX test/cpp_headers/bit_array.o 00:02:33.759 CXX test/cpp_headers/bdev_zone.o 00:02:33.759 CXX test/cpp_headers/bit_pool.o 00:02:33.759 CXX test/cpp_headers/blob_bdev.o 00:02:33.759 CXX test/cpp_headers/blobfs_bdev.o 00:02:33.759 CXX test/cpp_headers/blobfs.o 00:02:33.759 CXX test/cpp_headers/blob.o 00:02:33.759 CXX test/cpp_headers/config.o 00:02:33.759 CXX test/cpp_headers/conf.o 00:02:33.760 CXX test/cpp_headers/cpuset.o 00:02:33.760 CXX test/cpp_headers/crc16.o 00:02:33.760 CXX test/cpp_headers/crc32.o 00:02:33.760 CXX test/cpp_headers/crc64.o 00:02:33.760 CXX test/cpp_headers/dif.o 00:02:33.760 CXX test/cpp_headers/endian.o 00:02:33.760 CXX test/cpp_headers/dma.o 00:02:33.760 CXX test/cpp_headers/env_dpdk.o 00:02:33.760 CXX test/cpp_headers/env.o 00:02:33.760 CXX test/cpp_headers/event.o 00:02:33.760 CXX test/cpp_headers/fd_group.o 00:02:33.760 CXX test/cpp_headers/fd.o 00:02:33.760 CXX test/cpp_headers/file.o 00:02:33.760 CXX test/cpp_headers/ftl.o 00:02:33.760 CXX test/cpp_headers/gpt_spec.o 00:02:33.760 CXX test/cpp_headers/hexlify.o 00:02:33.760 CXX test/cpp_headers/histogram_data.o 00:02:33.760 CXX test/cpp_headers/idxd_spec.o 00:02:33.760 CXX test/cpp_headers/init.o 00:02:33.760 CXX test/cpp_headers/idxd.o 00:02:33.760 CXX test/cpp_headers/ioat.o 00:02:33.760 CXX 
test/cpp_headers/json.o 00:02:33.760 CXX test/cpp_headers/ioat_spec.o 00:02:33.760 CXX test/cpp_headers/iscsi_spec.o 00:02:33.760 CXX test/cpp_headers/jsonrpc.o 00:02:33.760 CXX test/cpp_headers/keyring.o 00:02:33.760 CXX test/cpp_headers/keyring_module.o 00:02:33.760 CXX test/cpp_headers/likely.o 00:02:33.760 CC test/thread/poller_perf/poller_perf.o 00:02:33.760 CXX test/cpp_headers/log.o 00:02:33.760 CXX test/cpp_headers/memory.o 00:02:33.760 CXX test/cpp_headers/lvol.o 00:02:33.760 CXX test/cpp_headers/mmio.o 00:02:33.760 CXX test/cpp_headers/nvme.o 00:02:33.760 CXX test/cpp_headers/nvme_intel.o 00:02:33.760 CXX test/cpp_headers/nbd.o 00:02:33.760 CXX test/cpp_headers/notify.o 00:02:33.760 CXX test/cpp_headers/nvme_spec.o 00:02:33.760 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:33.760 CXX test/cpp_headers/nvme_zns.o 00:02:33.760 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:33.760 CC test/env/vtophys/vtophys.o 00:02:33.760 CXX test/cpp_headers/nvmf.o 00:02:33.760 CXX test/cpp_headers/nvme_ocssd.o 00:02:33.760 CXX test/cpp_headers/nvmf_spec.o 00:02:33.760 CXX test/cpp_headers/nvmf_cmd.o 00:02:33.760 CC test/app/jsoncat/jsoncat.o 00:02:33.760 LINK rpc_client_test 00:02:33.760 CC examples/util/zipf/zipf.o 00:02:33.760 CXX test/cpp_headers/nvmf_transport.o 00:02:33.760 CXX test/cpp_headers/opal.o 00:02:33.760 CC test/env/pci/pci_ut.o 00:02:33.760 CXX test/cpp_headers/opal_spec.o 00:02:33.760 CC test/env/memory/memory_ut.o 00:02:33.760 CXX test/cpp_headers/pci_ids.o 00:02:33.760 CXX test/cpp_headers/pipe.o 00:02:33.760 CXX test/cpp_headers/queue.o 00:02:33.760 CXX test/cpp_headers/reduce.o 00:02:33.760 CXX test/cpp_headers/rpc.o 00:02:33.760 CXX test/cpp_headers/sock.o 00:02:33.760 CXX test/cpp_headers/scsi_spec.o 00:02:33.760 CXX test/cpp_headers/scheduler.o 00:02:33.760 CXX test/cpp_headers/scsi.o 00:02:33.760 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:33.760 CXX test/cpp_headers/stdinc.o 00:02:33.760 CC test/app/histogram_perf/histogram_perf.o 00:02:34.036 CXX test/cpp_headers/thread.o 00:02:34.036 CXX test/cpp_headers/string.o 00:02:34.036 LINK spdk_lspci 00:02:34.036 CXX test/cpp_headers/trace.o 00:02:34.036 CXX test/cpp_headers/tree.o 00:02:34.036 CXX test/cpp_headers/ublk.o 00:02:34.036 CXX test/cpp_headers/trace_parser.o 00:02:34.036 CXX test/cpp_headers/util.o 00:02:34.036 CXX test/cpp_headers/uuid.o 00:02:34.036 CC examples/ioat/verify/verify.o 00:02:34.036 CXX test/cpp_headers/version.o 00:02:34.036 CXX test/cpp_headers/vfio_user_spec.o 00:02:34.036 CXX test/cpp_headers/vmd.o 00:02:34.036 CXX test/cpp_headers/vfio_user_pci.o 00:02:34.036 CXX test/cpp_headers/vhost.o 00:02:34.036 CC test/app/stub/stub.o 00:02:34.036 CXX test/cpp_headers/xor.o 00:02:34.036 CXX test/cpp_headers/zipf.o 00:02:34.036 CC examples/ioat/perf/perf.o 00:02:34.036 CC app/fio/nvme/fio_plugin.o 00:02:34.036 CC test/dma/test_dma/test_dma.o 00:02:34.036 LINK spdk_nvme_discover 00:02:34.036 CC app/fio/bdev/fio_plugin.o 00:02:34.036 CC test/app/bdev_svc/bdev_svc.o 00:02:34.036 LINK interrupt_tgt 00:02:34.037 LINK spdk_trace_record 00:02:34.037 LINK iscsi_tgt 00:02:34.301 LINK nvmf_tgt 00:02:34.301 CC test/env/mem_callbacks/mem_callbacks.o 00:02:34.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:34.301 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:34.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:34.301 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:34.301 LINK spdk_tgt 00:02:34.301 LINK vtophys 00:02:34.651 LINK zipf 00:02:34.651 LINK poller_perf 00:02:34.651 LINK env_dpdk_post_init 
00:02:34.651 LINK histogram_perf 00:02:34.651 LINK jsoncat 00:02:34.651 LINK bdev_svc 00:02:34.651 LINK spdk_dd 00:02:34.651 LINK verify 00:02:34.651 LINK ioat_perf 00:02:34.651 LINK stub 00:02:34.651 LINK spdk_trace 00:02:34.912 LINK spdk_nvme_perf 00:02:34.912 LINK test_dma 00:02:34.912 LINK spdk_bdev 00:02:34.912 LINK nvme_fuzz 00:02:34.912 CC test/event/event_perf/event_perf.o 00:02:34.912 CC examples/vmd/led/led.o 00:02:34.912 LINK spdk_top 00:02:34.912 CC examples/vmd/lsvmd/lsvmd.o 00:02:34.912 CC examples/idxd/perf/perf.o 00:02:34.912 CC test/event/reactor/reactor.o 00:02:34.912 CC examples/sock/hello_world/hello_sock.o 00:02:34.912 CC test/event/reactor_perf/reactor_perf.o 00:02:34.912 LINK pci_ut 00:02:34.912 LINK spdk_nvme 00:02:34.912 LINK vhost_fuzz 00:02:34.912 CC test/event/scheduler/scheduler.o 00:02:34.913 CC test/event/app_repeat/app_repeat.o 00:02:34.913 CC examples/thread/thread/thread_ex.o 00:02:35.174 CC app/vhost/vhost.o 00:02:35.174 LINK led 00:02:35.174 LINK event_perf 00:02:35.174 LINK mem_callbacks 00:02:35.174 LINK lsvmd 00:02:35.174 LINK reactor 00:02:35.174 LINK reactor_perf 00:02:35.174 LINK app_repeat 00:02:35.174 LINK spdk_nvme_identify 00:02:35.174 LINK hello_sock 00:02:35.174 LINK idxd_perf 00:02:35.175 LINK scheduler 00:02:35.175 LINK thread 00:02:35.175 LINK vhost 00:02:35.436 CC test/nvme/overhead/overhead.o 00:02:35.436 CC test/nvme/startup/startup.o 00:02:35.436 CC test/nvme/aer/aer.o 00:02:35.436 CC test/nvme/simple_copy/simple_copy.o 00:02:35.436 CC test/nvme/reserve/reserve.o 00:02:35.436 CC test/nvme/e2edp/nvme_dp.o 00:02:35.436 CC test/nvme/connect_stress/connect_stress.o 00:02:35.436 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:35.436 CC test/nvme/reset/reset.o 00:02:35.436 CC test/nvme/sgl/sgl.o 00:02:35.436 CC test/nvme/fdp/fdp.o 00:02:35.436 CC test/nvme/fused_ordering/fused_ordering.o 00:02:35.436 CC test/nvme/boot_partition/boot_partition.o 00:02:35.436 CC test/nvme/err_injection/err_injection.o 00:02:35.436 CC test/nvme/cuse/cuse.o 00:02:35.436 CC test/nvme/compliance/nvme_compliance.o 00:02:35.436 CC test/blobfs/mkfs/mkfs.o 00:02:35.436 CC test/accel/dif/dif.o 00:02:35.436 LINK memory_ut 00:02:35.436 CC test/lvol/esnap/esnap.o 00:02:35.436 LINK doorbell_aers 00:02:35.436 LINK boot_partition 00:02:35.436 LINK startup 00:02:35.436 LINK connect_stress 00:02:35.436 LINK err_injection 00:02:35.698 LINK reserve 00:02:35.698 LINK fused_ordering 00:02:35.698 LINK mkfs 00:02:35.698 LINK overhead 00:02:35.698 LINK simple_copy 00:02:35.698 LINK reset 00:02:35.698 LINK sgl 00:02:35.698 LINK nvme_dp 00:02:35.698 LINK aer 00:02:35.698 LINK fdp 00:02:35.698 LINK nvme_compliance 00:02:35.698 CC examples/nvme/hotplug/hotplug.o 00:02:35.698 CC examples/nvme/abort/abort.o 00:02:35.698 CC examples/nvme/arbitration/arbitration.o 00:02:35.698 CC examples/nvme/reconnect/reconnect.o 00:02:35.698 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:35.698 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:35.698 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:35.698 CC examples/nvme/hello_world/hello_world.o 00:02:35.698 CC examples/accel/perf/accel_perf.o 00:02:35.698 CC examples/blob/hello_world/hello_blob.o 00:02:35.698 CC examples/blob/cli/blobcli.o 00:02:35.698 LINK dif 00:02:35.960 LINK pmr_persistence 00:02:35.960 LINK cmb_copy 00:02:35.960 LINK iscsi_fuzz 00:02:35.960 LINK hello_world 00:02:35.960 LINK hotplug 00:02:35.960 LINK arbitration 00:02:35.960 LINK reconnect 00:02:35.960 LINK abort 00:02:35.960 LINK hello_blob 00:02:36.221 LINK nvme_manage 
00:02:36.221 LINK accel_perf 00:02:36.221 LINK blobcli 00:02:36.482 CC test/bdev/bdevio/bdevio.o 00:02:36.482 LINK cuse 00:02:36.742 LINK bdevio 00:02:36.742 CC examples/bdev/bdevperf/bdevperf.o 00:02:36.742 CC examples/bdev/hello_world/hello_bdev.o 00:02:37.004 LINK hello_bdev 00:02:37.577 LINK bdevperf 00:02:38.150 CC examples/nvmf/nvmf/nvmf.o 00:02:38.410 LINK nvmf 00:02:39.797 LINK esnap 00:02:40.059 00:02:40.059 real 0m51.087s 00:02:40.059 user 6m31.430s 00:02:40.059 sys 4m36.156s 00:02:40.059 19:57:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:40.059 19:57:37 make -- common/autotest_common.sh@10 -- $ set +x 00:02:40.059 ************************************ 00:02:40.059 END TEST make 00:02:40.059 ************************************ 00:02:40.059 19:57:37 -- common/autotest_common.sh@1142 -- $ return 0 00:02:40.059 19:57:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:40.059 19:57:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:40.059 19:57:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:40.059 19:57:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.059 19:57:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:40.059 19:57:37 -- pm/common@44 -- $ pid=647426 00:02:40.059 19:57:37 -- pm/common@50 -- $ kill -TERM 647426 00:02:40.059 19:57:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.059 19:57:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:40.059 19:57:37 -- pm/common@44 -- $ pid=647427 00:02:40.059 19:57:37 -- pm/common@50 -- $ kill -TERM 647427 00:02:40.059 19:57:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.059 19:57:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:40.059 19:57:37 -- pm/common@44 -- $ pid=647429 00:02:40.059 19:57:37 -- pm/common@50 -- $ kill -TERM 647429 00:02:40.059 19:57:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.059 19:57:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:40.059 19:57:37 -- pm/common@44 -- $ pid=647453 00:02:40.059 19:57:37 -- pm/common@50 -- $ sudo -E kill -TERM 647453 00:02:40.059 19:57:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:40.059 19:57:37 -- nvmf/common.sh@7 -- # uname -s 00:02:40.059 19:57:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:40.059 19:57:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:40.059 19:57:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:40.059 19:57:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:40.059 19:57:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:40.059 19:57:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:40.059 19:57:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:40.059 19:57:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:40.059 19:57:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:40.059 19:57:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:40.321 19:57:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:40.321 19:57:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:40.321 
19:57:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:40.321 19:57:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:40.321 19:57:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:40.321 19:57:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:40.321 19:57:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:40.321 19:57:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:40.321 19:57:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.321 19:57:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.321 19:57:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.321 19:57:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.321 19:57:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.321 19:57:37 -- paths/export.sh@5 -- # export PATH 00:02:40.321 19:57:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.321 19:57:37 -- nvmf/common.sh@47 -- # : 0 00:02:40.321 19:57:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:40.321 19:57:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:40.321 19:57:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:40.321 19:57:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:40.321 19:57:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:40.321 19:57:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:40.321 19:57:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:40.321 19:57:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:40.321 19:57:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:40.321 19:57:37 -- spdk/autotest.sh@32 -- # uname -s 00:02:40.321 19:57:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:40.321 19:57:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:40.321 19:57:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:40.321 19:57:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:40.321 19:57:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:40.321 19:57:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:40.321 19:57:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:40.321 19:57:37 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:02:40.321 19:57:37 -- spdk/autotest.sh@48 -- # udevadm_pid=711103 00:02:40.321 19:57:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:40.321 19:57:37 -- pm/common@17 -- # local monitor 00:02:40.321 19:57:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.321 19:57:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.321 19:57:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:40.321 19:57:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.321 19:57:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.321 19:57:37 -- pm/common@25 -- # sleep 1 00:02:40.321 19:57:37 -- pm/common@21 -- # date +%s 00:02:40.321 19:57:37 -- pm/common@21 -- # date +%s 00:02:40.321 19:57:37 -- pm/common@21 -- # date +%s 00:02:40.321 19:57:37 -- pm/common@21 -- # date +%s 00:02:40.321 19:57:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066257 00:02:40.321 19:57:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066257 00:02:40.321 19:57:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066257 00:02:40.321 19:57:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066257 00:02:40.321 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066257_collect-vmstat.pm.log 00:02:40.321 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066257_collect-cpu-load.pm.log 00:02:40.321 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066257_collect-cpu-temp.pm.log 00:02:40.321 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066257_collect-bmc-pm.bmc.pm.log 00:02:41.269 19:57:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:41.269 19:57:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:41.269 19:57:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:41.269 19:57:38 -- common/autotest_common.sh@10 -- # set +x 00:02:41.269 19:57:38 -- spdk/autotest.sh@59 -- # create_test_list 00:02:41.269 19:57:38 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:41.269 19:57:38 -- common/autotest_common.sh@10 -- # set +x 00:02:41.269 19:57:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:41.269 19:57:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.269 19:57:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.269 19:57:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:41.269 19:57:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.269 19:57:38 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:41.269 19:57:38 -- common/autotest_common.sh@1455 -- # uname 00:02:41.269 19:57:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:41.269 19:57:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:41.269 19:57:38 -- common/autotest_common.sh@1475 -- # uname 00:02:41.269 19:57:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:41.269 19:57:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:41.269 19:57:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:41.269 19:57:38 -- spdk/autotest.sh@72 -- # hash lcov 00:02:41.269 19:57:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:41.269 19:57:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:41.269 --rc lcov_branch_coverage=1 00:02:41.269 --rc lcov_function_coverage=1 00:02:41.269 --rc genhtml_branch_coverage=1 00:02:41.269 --rc genhtml_function_coverage=1 00:02:41.269 --rc genhtml_legend=1 00:02:41.269 --rc geninfo_all_blocks=1 00:02:41.269 ' 00:02:41.269 19:57:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:41.269 --rc lcov_branch_coverage=1 00:02:41.269 --rc lcov_function_coverage=1 00:02:41.269 --rc genhtml_branch_coverage=1 00:02:41.269 --rc genhtml_function_coverage=1 00:02:41.269 --rc genhtml_legend=1 00:02:41.269 --rc geninfo_all_blocks=1 00:02:41.269 ' 00:02:41.269 19:57:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:41.269 --rc lcov_branch_coverage=1 00:02:41.269 --rc lcov_function_coverage=1 00:02:41.269 --rc genhtml_branch_coverage=1 00:02:41.269 --rc genhtml_function_coverage=1 00:02:41.269 --rc genhtml_legend=1 00:02:41.269 --rc geninfo_all_blocks=1 00:02:41.269 --no-external' 00:02:41.269 19:57:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:41.269 --rc lcov_branch_coverage=1 00:02:41.269 --rc lcov_function_coverage=1 00:02:41.269 --rc genhtml_branch_coverage=1 00:02:41.269 --rc genhtml_function_coverage=1 00:02:41.269 --rc genhtml_legend=1 00:02:41.269 --rc geninfo_all_blocks=1 00:02:41.269 --no-external' 00:02:41.269 19:57:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:41.269 lcov: LCOV version 1.14 00:02:41.269 19:57:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:46.566 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:46.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:46.566 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:46.567 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:46.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:46.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:04.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:04.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.280 19:58:07 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.280 19:58:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:11.280 19:58:07 -- common/autotest_common.sh@10 -- # set +x 00:03:11.280 19:58:07 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.280 19:58:07 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.830 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:13.830 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:14.092 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:14.092 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:14.353 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:14.353 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:14.353 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:14.353 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:14.353 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:14.615 19:58:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:14.615 19:58:11 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:14.615 19:58:11 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:14.615 19:58:11 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:14.615 19:58:11 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:14.615 19:58:11 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:14.615 19:58:11 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:14.615 19:58:11 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.615 19:58:11 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:14.615 19:58:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:14.615 19:58:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.615 19:58:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:14.615 19:58:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:14.615 19:58:11 -- scripts/common.sh@378 -- # local 
block=/dev/nvme0n1 pt 00:03:14.615 19:58:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.615 No valid GPT data, bailing 00:03:14.615 19:58:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.615 19:58:11 -- scripts/common.sh@391 -- # pt= 00:03:14.615 19:58:11 -- scripts/common.sh@392 -- # return 1 00:03:14.615 19:58:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.615 1+0 records in 00:03:14.615 1+0 records out 00:03:14.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00228026 s, 460 MB/s 00:03:14.615 19:58:11 -- spdk/autotest.sh@118 -- # sync 00:03:14.615 19:58:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.615 19:58:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.615 19:58:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:22.762 19:58:19 -- spdk/autotest.sh@124 -- # uname -s 00:03:22.762 19:58:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:22.762 19:58:19 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:22.762 19:58:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.762 19:58:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.762 19:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:22.762 ************************************ 00:03:22.762 START TEST setup.sh 00:03:22.762 ************************************ 00:03:22.762 19:58:19 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:22.762 * Looking for test storage... 00:03:22.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:22.762 19:58:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:22.762 19:58:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:22.762 19:58:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:22.762 19:58:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.762 19:58:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.762 19:58:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.762 ************************************ 00:03:22.762 START TEST acl 00:03:22.762 ************************************ 00:03:22.762 19:58:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:23.023 * Looking for test storage... 
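Note: just above, autotest.sh probed /dev/nvme0n1 for a partition table and, finding none ("No valid GPT data, bailing", empty blkid PTTYPE), zeroed the first MiB of the disk before the setup suites run. A minimal standalone sketch of that wipe, assuming the same device path as this run and skipping the scripts/spdk-gpt.py probe shown in the trace:
dev=/dev/nvme0n1                                  # device path taken from this run
if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
    # blkid printed nothing, so no partition table claims the device;
    # zero the first MiB and flush so the later tests start from a clean disk.
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
fi
Zeroing only the first MiB matches what the trace records (bs=1M count=1); that region holds the MBR and the primary GPT header, which is what the block_in_use check looks at.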
00:03:23.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.023 19:58:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.023 19:58:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.023 19:58:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:23.023 19:58:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:23.023 19:58:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:23.023 19:58:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:23.023 19:58:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:23.023 19:58:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.023 19:58:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.234 19:58:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:27.234 19:58:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:27.234 19:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.234 19:58:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:27.234 19:58:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.234 19:58:24 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:30.534 Hugepages 00:03:30.534 node hugesize free / total 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 00:03:30.534 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.534 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:30.535 19:58:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:30.535 19:58:27 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.535 19:58:27 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.535 19:58:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.535 ************************************ 00:03:30.535 START TEST denied 00:03:30.535 ************************************ 00:03:30.535 19:58:27 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:30.535 19:58:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:30.535 19:58:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:30.535 19:58:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:30.535 19:58:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.535 19:58:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.816 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:34.816 19:58:31 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.816 19:58:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.021 00:03:39.021 real 0m8.506s 00:03:39.021 user 0m2.832s 00:03:39.021 sys 0m4.906s 00:03:39.021 19:58:36 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.021 19:58:36 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:39.021 ************************************ 00:03:39.021 END TEST denied 00:03:39.021 ************************************ 00:03:39.021 19:58:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:39.021 19:58:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:39.021 19:58:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.021 19:58:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.021 19:58:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.021 ************************************ 00:03:39.021 START TEST allowed 00:03:39.021 ************************************ 00:03:39.021 19:58:36 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:39.021 19:58:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:39.021 19:58:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:39.021 19:58:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:39.021 19:58:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.021 19:58:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.313 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.313 19:58:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:44.313 19:58:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:44.313 19:58:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:44.313 19:58:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.313 19:58:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.515 00:03:48.515 real 0m9.166s 00:03:48.515 user 0m2.698s 00:03:48.515 sys 0m4.690s 00:03:48.515 19:58:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.515 19:58:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:48.515 ************************************ 00:03:48.515 END TEST allowed 00:03:48.515 ************************************ 00:03:48.515 19:58:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:48.515 00:03:48.515 real 0m25.382s 00:03:48.515 user 0m8.500s 00:03:48.515 sys 0m14.539s 00:03:48.515 19:58:45 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.515 19:58:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.515 ************************************ 00:03:48.515 END TEST acl 00:03:48.515 ************************************ 00:03:48.515 19:58:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:48.515 19:58:45 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:48.515 19:58:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.515 19:58:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.515 19:58:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.515 ************************************ 00:03:48.515 START TEST hugepages 00:03:48.515 ************************************ 00:03:48.515 19:58:45 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:48.515 * Looking for test storage... 00:03:48.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102882532 kB' 'MemAvailable: 106368488 kB' 'Buffers: 2704 kB' 'Cached: 14470540 kB' 'SwapCached: 0 kB' 'Active: 11514336 kB' 'Inactive: 3523448 kB' 'Active(anon): 11040152 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567960 kB' 'Mapped: 201892 kB' 'Shmem: 10475612 kB' 'KReclaimable: 529268 kB' 'Slab: 1401172 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 871904 kB' 'KernelStack: 27296 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12621372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.515 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
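Note: the hugepages suite here is stepping field by field through the captured /proc/meminfo snapshot using setup/common.sh's get_meminfo loop (mapfile, IFS=': ', read -r var val _). A simplified standalone sketch of that lookup, which omits the per-node "Node N" handling the real helper also does:
get=Hugepagesize                  # field to extract, e.g. Hugepagesize or HugePages_Free
while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
        echo "$val"               # on this box: 2048, i.e. a 2048 kB default huge page size
        break
    fi
done < /proc/meminfo
Splitting each line into key and value in one read keeps the lookup free of awk/grep dependencies, in the same style as the traced helper.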
00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.516 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.516 19:58:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.517 19:58:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:48.517 19:58:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.517 19:58:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.517 19:58:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.517 ************************************ 00:03:48.517 START TEST default_setup 00:03:48.517 ************************************ 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.517 19:58:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.820 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:51.820 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:51.820 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:52.081 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105045972 kB' 'MemAvailable: 108531928 kB' 'Buffers: 2704 kB' 'Cached: 14470672 kB' 'SwapCached: 0 kB' 'Active: 11531628 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057444 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584536 kB' 'Mapped: 202244 kB' 'Shmem: 10475744 kB' 'KReclaimable: 529268 kB' 'Slab: 1398600 kB' 'SReclaimable: 529268 
kB' 'SUnreclaim: 869332 kB' 'KernelStack: 27344 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.347 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.348 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105047324 kB' 'MemAvailable: 108533280 kB' 'Buffers: 2704 kB' 'Cached: 14470676 kB' 'SwapCached: 0 kB' 'Active: 11530444 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056260 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583876 kB' 'Mapped: 202104 kB' 'Shmem: 10475748 kB' 'KReclaimable: 529268 kB' 'Slab: 1398568 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869300 kB' 'KernelStack: 27344 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.349 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105047072 kB' 'MemAvailable: 108533028 kB' 'Buffers: 2704 kB' 'Cached: 14470692 kB' 'SwapCached: 0 kB' 'Active: 11530524 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056340 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583876 kB' 'Mapped: 202104 kB' 'Shmem: 10475764 kB' 'KReclaimable: 529268 kB' 'Slab: 1398568 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869300 kB' 'KernelStack: 27344 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 
19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 
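Reading aid: the repeated [[ <key> == HugePages_... ]] / continue pairs above and below are the same per-key scan of /proc/meminfo being replayed for each requested field. The following is a minimal bash sketch of what the traced setup/common.sh get_meminfo helper appears to do, reconstructed only from this xtrace (not taken from the SPDK source), so names and details are assumptions:

    #!/usr/bin/env bash
    # Hedged reconstruction of the get_meminfo helper seen in the trace
    # (setup/common.sh@17-33); not the actual SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1            # meminfo key to look up, e.g. HugePages_Surp
        local node=$2           # optional NUMA node number
        local var val _
        local mem_f=/proc/meminfo mem

        # With a node argument, read the per-node file instead (common.sh@23-24)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"                 # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")          # strip "Node N " prefixes (common.sh@29)

        # Scan "Key: value [kB]" lines until the requested key matches (common.sh@31-33)
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In the run traced here, get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both return 0, which is where the surp=0 and resv=0 values come from.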
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.352 nr_hugepages=1024 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.352 resv_hugepages=0 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.352 surplus_hugepages=0 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.352 anon_hugepages=0 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105047072 kB' 'MemAvailable: 108533028 kB' 'Buffers: 2704 kB' 'Cached: 
14470712 kB' 'SwapCached: 0 kB' 'Active: 11530896 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056712 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584228 kB' 'Mapped: 202104 kB' 'Shmem: 10475784 kB' 'KReclaimable: 529268 kB' 'Slab: 1398568 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869300 kB' 'KernelStack: 27360 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.352 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 
19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 
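Side note: the meminfo dump repeated in this trace also lets the hugepage totals be cross-checked by hand, since 1024 pages of 2048 kB account exactly for the reported Hugetlb value:

    # Arithmetic check against the dump above: HugePages_Total x Hugepagesize
    echo $((1024 * 2048)) kB    # 2097152 kB, matching the 'Hugetlb: 2097152 kB' line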
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.353 19:58:49 
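Once surp and resv are known, the consistency checks traced at hugepages.sh@107-110 reduce to a small amount of arithmetic. A hedged illustration, plugging in the values from this run and reusing the get_meminfo sketch shown earlier:

    # Values as read back in this run (via the get_meminfo sketch above)
    nr_hugepages=1024                       # requested default hugepage count
    surp=$(get_meminfo HugePages_Surp)      # 0 here
    resv=$(get_meminfo HugePages_Rsvd)      # 0 here
    total=$(get_meminfo HugePages_Total)    # 1024 here

    # The traced checks amount to: the kernel must report exactly the requested
    # count once surplus and reserved pages are accounted for; the same check is
    # repeated after HugePages_Total is read back (hugepages.sh@110).
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'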
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52577412 kB' 'MemUsed: 13081596 kB' 'SwapCached: 0 kB' 'Active: 4821044 kB' 'Inactive: 3298656 kB' 'Active(anon): 4668484 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809352 kB' 'Mapped: 118224 kB' 'AnonPages: 313556 kB' 'Shmem: 4358136 kB' 'KernelStack: 16152 kB' 'PageTables: 5112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396100 kB' 'Slab: 907712 kB' 
'SReclaimable: 396100 kB' 'SUnreclaim: 511612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
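After the system-wide check, the trace moves on to get_nodes (hugepages.sh@27-33) and then re-reads HugePages_Surp per NUMA node, which is why mem_f switches to /sys/devices/system/node/node0/meminfo above. A self-contained sketch of that per-node walk, assuming the same sysfs layout as this machine (two nodes, node0 holding all 1024 pages):

    #!/usr/bin/env bash
    shopt -s extglob

    # Walk every NUMA node directory, as the traced get_nodes step does,
    # and pull the hugepage counters straight from each node's meminfo file.
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Per-node lines look like "Node 0 HugePages_Surp: 0"
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
        echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
    done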
[xtrace trimmed: the setup/common.sh read loop repeats IFS=': ' / read -r var val _ / [[ ... == HugePages_Surp ]] / continue for each remaining /proc/meminfo key]
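The trimmed loop above is the standard bash idiom the traced get_meminfo helper in setup/common.sh relies on: split each meminfo line on IFS=': ', skip every key that is not the one requested, and echo the value of the one that matches. A minimal stand-alone sketch of that pattern (illustrative only -- the function name and details below are made up, not the actual setup/common.sh source):

  get_meminfo_value() {                        # e.g. get_meminfo_value HugePages_Surp [node]
      local key=$1 node=${2:-} mem_f=/proc/meminfo var val _
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] || continue     # skip every other key, as in the trace above
          echo "$val"                          # kB figure, or a bare count for HugePages_* keys
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
  }

  get_meminfo_value HugePages_Surp             # prints 0 for the snapshot shown above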
00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.355 node0=1024 expecting 1024 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.355 00:03:52.355 real 0m3.960s 00:03:52.355 user 0m1.574s 00:03:52.355 sys 0m2.384s 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.355 19:58:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:52.355 ************************************ 00:03:52.355 END TEST default_setup 00:03:52.355 ************************************ 00:03:52.616 19:58:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.616 19:58:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:52.616 19:58:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.616 19:58:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.616 19:58:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.616 ************************************ 00:03:52.616 START TEST per_node_1G_alloc 00:03:52.616 ************************************ 00:03:52.616 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:52.616 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:52.616 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:52.616 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.616 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:52.616 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.617 19:58:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.924 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.924 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.924 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105074364 kB' 'MemAvailable: 108560320 kB' 'Buffers: 2704 kB' 'Cached: 14470832 kB' 'SwapCached: 0 kB' 'Active: 11532076 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057892 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584860 kB' 'Mapped: 201264 kB' 'Shmem: 10475904 kB' 'KReclaimable: 529268 kB' 'Slab: 1398428 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869160 kB' 'KernelStack: 27248 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12627844 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.201 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue
[xtrace trimmed: the same setup/common.sh read loop skips each remaining /proc/meminfo key while scanning for AnonHugePages]
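For context on what this per_node_1G_alloc pass is driving at: the earlier trace computed 512 hugepages per node from a 1048576 kB request (Hugepagesize is 2048 kB in the snapshots) and exported NRHUGE=512 HUGENODE=0,1 before calling scripts/setup.sh. A rough hand-rolled equivalent against sysfs -- a sketch under those assumptions, not the repo's hugepages.sh logic:

  size_kb=1048576                                            # request from get_test_nr_hugepages
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
  per_node=$(( size_kb / hp_kb ))                            # 512
  for n in 0 1; do                                           # HUGENODE=0,1
      echo "$per_node" | sudo tee \
          "/sys/devices/system/node/node$n/hugepages/hugepages-${hp_kb}kB/nr_hugepages" >/dev/null
  done
  for n in 0 1; do                                           # report like the earlier "node0=1024 expecting 1024"
      got=$(cat "/sys/devices/system/node/node$n/hugepages/hugepages-${hp_kb}kB/nr_hugepages")
      echo "node$n=$got expecting $per_node"
  done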
00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return
0 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105074960 kB' 'MemAvailable: 108560916 kB' 'Buffers: 2704 kB' 'Cached: 14470852 kB' 'SwapCached: 0 kB' 'Active: 11531068 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056884 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583724 kB' 'Mapped: 201200 kB' 'Shmem: 10475924 kB' 'KReclaimable: 529268 kB' 'Slab: 1398424 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869156 kB' 'KernelStack: 27264 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12627864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:56.202 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.203 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.203 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.203 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.203 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.203 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.203 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.203 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace trimmed: the read loop walks every /proc/meminfo key again, continuing past each one until it reaches HugePages_Surp]
00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105074844 kB' 'MemAvailable: 108560800 kB' 'Buffers: 2704 kB' 'Cached: 14470852 kB' 'SwapCached: 0 kB' 'Active: 11530992 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056808 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584120 kB' 'Mapped: 201124 kB' 'Shmem: 10475924 kB' 'KReclaimable: 529268 kB' 'Slab: 1398396 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869128 kB' 'KernelStack: 27232 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12629496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.204 
19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.204 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.205 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.206 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.207 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.207 nr_hugepages=1024 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.207 resv_hugepages=0 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.207 surplus_hugepages=0 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.207 anon_hugepages=0 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105074972 kB' 'MemAvailable: 108560928 kB' 'Buffers: 2704 kB' 'Cached: 14470872 kB' 'SwapCached: 0 kB' 'Active: 11531292 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057108 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584384 kB' 'Mapped: 201124 kB' 'Shmem: 10475944 kB' 'KReclaimable: 529268 kB' 'Slab: 1398396 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869128 kB' 'KernelStack: 27168 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12627908 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 
101711872 kB' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.207 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.208 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.209 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.210 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53645444 kB' 'MemUsed: 12013564 kB' 'SwapCached: 0 kB' 'Active: 4820380 kB' 'Inactive: 3298656 kB' 'Active(anon): 4667820 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809436 kB' 'Mapped: 117768 kB' 'AnonPages: 312764 kB' 'Shmem: 4358220 kB' 'KernelStack: 16024 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396100 kB' 'Slab: 907916 kB' 'SReclaimable: 396100 kB' 'SUnreclaim: 511816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.210 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 
19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.211 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51427344 kB' 'MemUsed: 9252528 kB' 'SwapCached: 0 kB' 'Active: 6712036 kB' 'Inactive: 224792 kB' 'Active(anon): 6390412 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6664168 kB' 'Mapped: 83356 kB' 'AnonPages: 272768 kB' 'Shmem: 6117752 kB' 
'KernelStack: 11320 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133168 kB' 'Slab: 490480 kB' 'SReclaimable: 133168 kB' 'SUnreclaim: 357312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.212 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.213 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.214 node0=512 expecting 512 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:56.214 node1=512 expecting 512 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:56.214 00:03:56.214 real 0m3.796s 00:03:56.214 user 0m1.569s 00:03:56.214 sys 0m2.285s 00:03:56.214 19:58:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.214 19:58:53 
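Note on the trace above: this is the tail of the per-node surplus check. get_meminfo HugePages_Surp is read for each node (from /sys/devices/system/node/node1/meminfo here), added into nodes_test[node], and the accumulated counts are echoed and compared against the expected 512 pages per node. A minimal sketch of that accounting, reconstructed from the setup/hugepages.sh trace (resv and get_meminfo are names taken from the trace; this is not the verbatim upstream script):

# Sketch of the traced accounting in setup/hugepages.sh (not the verbatim upstream code).
nodes_test=(512 512)                 # expected hugepages per NUMA node
resv=0                               # reserved pages, 0 in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes above
    (( nodes_test[node] += surp ))
done
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting 512"
done

With zero reserved and zero surplus pages on both nodes, the run reports node0=512 expecting 512 and node1=512 expecting 512, which satisfies the final [[ 512 == 512 ]] check before the test exits.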
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.214 ************************************ 00:03:56.214 END TEST per_node_1G_alloc 00:03:56.214 ************************************ 00:03:56.476 19:58:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.476 19:58:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:56.476 19:58:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.476 19:58:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.476 19:58:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.476 ************************************ 00:03:56.476 START TEST even_2G_alloc 00:03:56.476 ************************************ 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:56.476 19:58:53 
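Note on the even_2G_alloc setup traced above: get_test_nr_hugepages 2097152 yields nr_hugepages=1024, which get_test_nr_hugepages_per_node then splits as 512 pages on each of the two nodes before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are set. A rough sketch of that arithmetic, assuming the 2097152 argument is in kB (consistent with the 2048 kB Hugepagesize reported below and with the resulting nr_hugepages=1024); the exact upstream implementation may differ:

# Sketch of the traced sizing logic (assumed units: kB); not the verbatim hugepages.sh.
size=2097152                                   # requested hugepage memory, about 2 GiB
default_hugepages=2048                         # Hugepagesize in kB
nr_hugepages=$(( size / default_hugepages ))   # 1024
no_nodes=2                                     # this system has two NUMA nodes
per_node=$(( nr_hugepages / no_nodes ))        # 512, matching nodes_test[_no_nodes - 1]=512
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes (${per_node} pages per node)"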
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.476 19:58:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.779 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.779 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.779 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.780 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.780 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.780 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.780 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.780 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.047 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 
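Note on the verify_nr_hugepages step that begins above: it calls get_meminfo AnonHugePages with no node argument, so the helper falls back to /proc/meminfo; when a node is given (as in the earlier HugePages_Surp reads) it switches to /sys/devices/system/node/node<N>/meminfo and strips the "Node N " prefix first. The long field-by-field trace that follows is its IFS=': ' read loop skipping every key until the requested one. An approximate reconstruction from the setup/common.sh trace (a sketch, not necessarily the exact upstream function):

# Approximate reconstruction of get_meminfo from the setup/common.sh trace; a sketch only.
shopt -s extglob                      # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1                      # field to return, e.g. AnonHugePages, HugePages_Surp
    local node=${2:-}                 # optional NUMA node number
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every skipped key shows up as 'continue' in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

In this run the lookup returns 0 for AnonHugePages, so anon=0, and the subsequent HugePages_Surp lookups below return 0 as well.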
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105091624 kB' 'MemAvailable: 108577580 kB' 'Buffers: 2704 kB' 'Cached: 14471016 kB' 'SwapCached: 0 kB' 'Active: 11532828 kB' 'Inactive: 3523448 kB' 'Active(anon): 11058644 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585900 kB' 'Mapped: 201204 kB' 'Shmem: 10476088 kB' 'KReclaimable: 529268 kB' 'Slab: 1398752 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869484 kB' 'KernelStack: 27520 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12630588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.048 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105091540 kB' 'MemAvailable: 108577496 kB' 'Buffers: 2704 kB' 'Cached: 14471020 kB' 'SwapCached: 0 kB' 'Active: 11532788 kB' 'Inactive: 3523448 kB' 'Active(anon): 11058604 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585584 kB' 'Mapped: 201148 kB' 'Shmem: 10476092 kB' 'KReclaimable: 529268 kB' 'Slab: 1398752 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869484 kB' 'KernelStack: 27456 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12630604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235732 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.049 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[ setup/common.sh@31/@32 repeat for every remaining field of the snapshot above, each failing the HugePages_Surp match and hitting "continue" ]
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
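The bare anon=0 / surp=0 assignments in the trace are get_meminfo calls captured by command substitution; an equivalent, self-contained way to pull the same three numbers (using awk here rather than SPDK's helper, values as observed on this box):

anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)    # 0 (kB), hugepages.sh@97
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0, hugepages.sh@99
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0, hugepages.sh@100, traced next
echo "anon=$anon surp=$surp resv=$resv"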
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.051 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105091688 kB' 'MemAvailable: 108577644 kB' 'Buffers: 2704 kB' 'Cached: 14471036 kB' 'SwapCached: 0 kB' 'Active: 11532684 kB' 'Inactive: 3523448 kB' 'Active(anon): 11058500 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585652 kB' 'Mapped: 201140 kB' 'Shmem: 10476108 kB' 'KReclaimable: 529268 kB' 'Slab: 1398808 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869540 kB' 'KernelStack: 27520 kB' 'PageTables: 9332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12629016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235732 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB'
[ setup/common.sh@31/@32 walk the fields of the snapshot above, each failing the HugePages_Rsvd match and hitting "continue", until the HugePages_Rsvd line is reached ]
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:00.053 nr_hugepages=1024
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:00.053 resv_hugepages=0
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:00.053 surplus_hugepages=0
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:00.053 anon_hugepages=0
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
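With those values the two guards at hugepages.sh@107/@109 reduce to plain arithmetic, and the snapshot's own hugepage fields line up with the 2G the test name promises (a sketch reusing the observed numbers):

nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) && echo "hugepages.sh@107: ok"    # 1024 == 1024 + 0 + 0
(( 1024 == nr_hugepages ))               && echo "hugepages.sh@109: ok"
echo "$(( 1024 * 2048 )) kB"    # 2097152 kB, i.e. the Hugetlb line: 2 GiB of 2048 kB pages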
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.053 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105090804 kB' 'MemAvailable: 108576760 kB' 'Buffers: 2704 kB' 'Cached: 14471060 kB' 'SwapCached: 0 kB' 'Active: 11532780 kB' 'Inactive: 3523448 kB' 'Active(anon): 11058596 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585704 kB' 'Mapped: 201148 kB' 'Shmem: 10476132 kB' 'KReclaimable: 529268 kB' 'Slab: 1398808 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869540 kB' 'KernelStack: 27392 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12629040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB'
[ setup/common.sh@31/@32 walk the fields of the snapshot above, each failing the HugePages_Total match and hitting "continue", until the HugePages_Total line is reached ]
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.054 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.054 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53662080 kB' 'MemUsed: 11996928 kB' 'SwapCached: 0 kB' 'Active: 4823584 kB' 'Inactive: 3298656 kB' 'Active(anon): 4671024 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809608 kB' 'Mapped: 117784 kB' 'AnonPages: 315836 kB' 'Shmem: 4358392 kB' 'KernelStack: 16120 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396100 kB' 'Slab: 908228 kB' 'SReclaimable: 396100 kB' 'SUnreclaim: 512128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51428800 kB' 'MemUsed: 9251072 kB' 'SwapCached: 0 kB' 'Active: 6708964 kB' 'Inactive: 224792 kB' 'Active(anon): 6387340 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6664172 kB' 'Mapped: 83356 kB' 'AnonPages: 269644 kB' 'Shmem: 6117756 kB' 'KernelStack: 11272 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133168 kB' 'Slab: 490580 kB' 'SReclaimable: 133168 kB' 'SUnreclaim: 357412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 
19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.318 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 
19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.319 node0=512 expecting 512 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:00.319 node1=512 expecting 512 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.319 00:04:00.319 real 0m3.800s 00:04:00.319 user 0m1.549s 00:04:00.319 sys 0m2.311s 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.319 19:58:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.319 ************************************ 00:04:00.319 END TEST even_2G_alloc 00:04:00.319 
************************************ 00:04:00.319 19:58:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.319 19:58:57 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.319 19:58:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.319 19:58:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.319 19:58:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.319 ************************************ 00:04:00.319 START TEST odd_alloc 00:04:00.319 ************************************ 00:04:00.319 19:58:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:00.319 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.320 19:58:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
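The odd_alloc trace above has just computed a per-node split for an odd hugepage count: get_test_nr_hugepages turns the 2098176 kB request into nr_hugepages=1025, and get_test_nr_hugepages_per_node spreads that across the 2 NUMA nodes as node1=512 and node0=513 before HUGEMEM=2049 HUGE_EVEN_ALLOC=yes setup output invokes scripts/setup.sh. The exact loop body in setup/hugepages.sh is not fully visible in this excerpt, so the following is only a minimal sketch of a distribution rule that reproduces the traced 512/513 result (remaining pages divided by remaining nodes, highest node index first):

```bash
#!/usr/bin/env bash
# Sketch only: reproduces the 1025 -> node1=512, node0=513 split seen in the
# trace; the real logic lives in setup/hugepages.sh (get_test_nr_hugepages_per_node).
total=1025    # nr_hugepages derived from HUGEMEM=2049 with 2 MiB pages
no_nodes=2    # NUMA nodes present on this host
declare -a nodes_test

for ((node = no_nodes - 1; node >= 0; node--)); do
    # Spread the remaining pages over the remaining nodes; the remainder
    # ends up on node0, which is assigned last.
    nodes_test[node]=$((total / (node + 1)))
    ((total -= nodes_test[node]))
done

printf 'node%d=%d\n' 1 "${nodes_test[1]}" 0 "${nodes_test[0]}"   # node1=512, node0=513
```

The meminfo dump later in the trace confirms the kernel accepted the full request ('HugePages_Total: 1025').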
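The per-node accounting in the even_2G_alloc run that ends above, and the AnonHugePages and HugePages_Surp checks that follow, all go through the get_meminfo helper in setup/common.sh, whose commands are what the xtrace keeps repeating: pick /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strip the "Node N " prefix, split each line on ": ", and echo the value for the requested key. Below is a compact reconstruction from the trace, hedged because xtrace hides redirections and the exact loop construct, so details may differ from the repo source:

```bash
# Reconstructed from the xtrace of setup/common.sh get_meminfo; illustrative only.
shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=$2    # e.g. get=HugePages_Surp node=0
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node counters live under /sys and carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the long runs of "continue" in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 0    # prints 0 on this host, matching the traced "echo 0"
```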
00:04:03.625 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.625 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.625 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105094824 kB' 'MemAvailable: 108580780 kB' 'Buffers: 2704 kB' 'Cached: 14471196 kB' 'SwapCached: 0 kB' 'Active: 11530820 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056636 kB' 'Inactive(anon): 0 kB' 
'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583248 kB' 'Mapped: 201328 kB' 'Shmem: 10476268 kB' 'KReclaimable: 529268 kB' 'Slab: 1399128 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869860 kB' 'KernelStack: 27312 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12628556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.889 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.889 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 
19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 
19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.890 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.891 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105095440 kB' 'MemAvailable: 108581396 kB' 'Buffers: 2704 kB' 'Cached: 14471200 kB' 'SwapCached: 0 kB' 'Active: 11530964 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056780 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583404 kB' 'Mapped: 201232 kB' 'Shmem: 10476272 kB' 'KReclaimable: 529268 kB' 'Slab: 1399128 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869860 kB' 'KernelStack: 27296 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12628576 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 
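The entries above are setup/common.sh's get_meminfo walking the whole /proc/meminfo dump key by key until it reaches the requested field (AnonHugePages first, HugePages_Surp here) and echoing its numeric value; every non-matching key simply logs a continue. A minimal standalone sketch of that lookup follows, assuming a plain /proc/meminfo (no per-node file) and using an illustrative helper name, get_meminfo_field, rather than the exact common.sh implementation:

#!/usr/bin/env bash
# Sketch only: print the numeric value of one /proc/meminfo field,
# the same lookup the xtrace above performs one key at a time.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"                        # value only; the unit column is discarded
        return 0
    done < /proc/meminfo
    return 1
}

# Mirrors the HugePages_Surp lookup in this log; prints 0 on the host recorded above.
get_meminfo_field HugePages_Surp
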
19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.891 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105096524 kB' 'MemAvailable: 108582480 kB' 'Buffers: 2704 kB' 'Cached: 14471212 kB' 'SwapCached: 0 kB' 'Active: 11530780 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056596 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583788 kB' 'Mapped: 201780 kB' 'Shmem: 10476284 kB' 'KReclaimable: 529268 kB' 'Slab: 1399144 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869876 kB' 'KernelStack: 27312 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12628596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.892 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.893 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:03.894 nr_hugepages=1025 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.894 resv_hugepages=0 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.894 surplus_hugepages=0 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.894 anon_hugepages=0 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105096632 kB' 'MemAvailable: 108582588 kB' 'Buffers: 2704 kB' 'Cached: 14471232 kB' 'SwapCached: 0 kB' 'Active: 11530640 kB' 'Inactive: 3523448 kB' 'Active(anon): 11056456 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583600 kB' 'Mapped: 201156 kB' 'Shmem: 10476304 kB' 'KReclaimable: 529268 kB' 'Slab: 1399144 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869876 kB' 'KernelStack: 27216 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12628248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.894 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 
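A few entries back (setup/hugepages.sh@102 through @109) the script echoed nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserted (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )) before starting the HugePages_Total lookup whose trace continues around here. A hedged sketch of that odd_alloc verification, with illustrative variable names (expected, nr_hugepages, surp, resv) and the values this run reports, so both comparisons hold:

#!/usr/bin/env bash
# Sketch only: the two arithmetic assertions hugepages.sh evaluates in this log,
# written out with the values reported by the run above.
expected=1025        # odd page count requested by the odd_alloc test
nr_hugepages=1025    # HugePages_Total from /proc/meminfo
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd

# The odd allocation only counts if no surplus or reserved pages pad the total
# and the kernel granted exactly the requested number of pages.
(( expected == nr_hugepages + surp + resv )) || exit 1
(( expected == nr_hugepages )) || exit 1
echo "odd allocation of ${expected} hugepages verified"
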
19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.895 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.159 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53671440 kB' 'MemUsed: 11987568 kB' 'SwapCached: 0 kB' 'Active: 4819800 kB' 'Inactive: 3298656 kB' 'Active(anon): 4667240 
kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809748 kB' 'Mapped: 117800 kB' 'AnonPages: 311976 kB' 'Shmem: 4358532 kB' 'KernelStack: 16056 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396100 kB' 'Slab: 908484 kB' 'SReclaimable: 396100 kB' 'SUnreclaim: 512384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.160 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
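The long runs of "IFS=': '", "read -r var val _", "[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" and "continue" entries above are the traced key-matching loop of get_meminfo in setup/common.sh: it scans /proc/meminfo (or a per-node meminfo file) until the requested field is found and echoed, 1025 for HugePages_Total in this run. A minimal standalone sketch of that parsing pattern follows; it is not the SPDK helper itself, and the function name is illustrative.

#!/usr/bin/env bash
# Editorial sketch of the traced lookup: pick the system-wide or per-node
# meminfo file, strip any "Node N " prefix, then scan "Key: value" pairs
# until the requested key matches.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node +([0-9]) }            # per-node files prefix each row with "Node N"
        IFS=': ' read -r var val _ <<< "$line" # split "Key:   value kB" into key and value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# In the run above: get_meminfo_sketch HugePages_Total  -> 1025
#                   get_meminfo_sketch HugePages_Surp 0 -> 0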
00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.161 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51425552 kB' 'MemUsed: 9254320 kB' 'SwapCached: 0 kB' 'Active: 6710492 kB' 'Inactive: 224792 kB' 'Active(anon): 6388868 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6664212 kB' 'Mapped: 83356 kB' 'AnonPages: 271196 kB' 'Shmem: 6117796 kB' 'KernelStack: 11144 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133168 kB' 'Slab: 490660 kB' 'SReclaimable: 133168 kB' 'SUnreclaim: 357492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:04.162 
19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.162 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
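The node0=512 / node1=513 lines echoed just below compare what each NUMA node currently holds against what the odd_alloc test requested, after the per-node HugePages_Surp reads above. A sketch of one way to gather those per-node counts follows; the sysfs paths are the standard kernel layout and are assumed here (the exact files the script reads are not visible in this trace), and the 2048 kB page size is taken from the Hugepagesize reported earlier.

#!/usr/bin/env bash
# Editorial sketch: gather per-node counts of 2048 kB hugepages, the kind of
# values behind the node0/node1 expectations above (sysfs paths assumed).
shopt -s nullglob
declare -A nodes_sys

for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}                                   # node index, e.g. "0" or "1"
    nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "node0=${nodes_sys[0]:-0} node1=${nodes_sys[1]:-0}"   # e.g. node0=512 node1=513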
00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:04.163 node0=512 expecting 513 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:04.163 node1=513 expecting 512 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:04.163 00:04:04.163 real 0m3.819s 00:04:04.163 user 0m1.509s 00:04:04.163 sys 0m2.372s 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.163 19:59:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.163 ************************************ 00:04:04.163 END TEST odd_alloc 00:04:04.163 ************************************ 00:04:04.163 19:59:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:04.163 19:59:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:04.163 19:59:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.163 19:59:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.163 19:59:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.163 ************************************ 00:04:04.163 START TEST custom_alloc 00:04:04.163 ************************************ 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.163 19:59:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.465 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
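In the custom_alloc prologue traced above, get_test_nr_hugepages converts the requested sizes into default-sized pages (1048576 kB -> 512 and 2097152 kB -> 1024 pages of 2048 kB), and the per-node targets nodes_hp[0]=512 and nodes_hp[1]=1024 are then joined into the comma-separated HUGENODE string passed on before "setup output" invokes scripts/setup.sh. A sketch of that assembly step, assuming only what the trace shows:

#!/usr/bin/env bash
# Editorial sketch of the HUGENODE assembly traced above: per-node hugepage
# targets are rendered as "nodes_hp[N]=count" and joined with commas.
nodes_hp=([0]=512 [1]=1024)

HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done

(IFS=','; echo "HUGENODE=${HUGENODE[*]}")   # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024

This reproduces the value shown at setup/hugepages.sh@187 in the log, HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.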
00:04:07.465 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:07.465 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:07.465 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104031492 kB' 'MemAvailable: 107517448 kB' 'Buffers: 2704 kB' 'Cached: 14471368 kB' 'SwapCached: 0 kB' 'Active: 11531864 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057680 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584040 kB' 'Mapped: 201316 kB' 'Shmem: 10476440 kB' 'KReclaimable: 529268 kB' 'Slab: 1399836 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870568 kB' 'KernelStack: 27280 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12629132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.733 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
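The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries is get_meminfo from setup/common.sh scanning the captured meminfo array field by field under xtrace, which is why every non-matching key produces its own trace line. A self-contained sketch of that pattern, assuming the same mapfile + IFS=': ' read loop visible in the trace (reconstructed, not copied from the script):

  #!/usr/bin/env bash
  shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

  # Sketch: return a single field from /proc/meminfo, or from a
  # node-specific meminfo file when a node number is supplied.
  get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "

    local line var val _
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue   # the step traced over and over above
      echo "${val:-0}"
      return 0
    done
    echo 0
  }

  get_meminfo_sketch AnonHugePages     # prints 0 on the host traced above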
00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104031872 kB' 'MemAvailable: 107517828 kB' 'Buffers: 2704 kB' 'Cached: 14471372 kB' 'SwapCached: 0 kB' 'Active: 11531492 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057308 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584192 kB' 'Mapped: 201176 kB' 'Shmem: 10476444 kB' 'KReclaimable: 529268 kB' 'Slab: 1399828 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870560 kB' 'KernelStack: 27264 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12629152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 
19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.734 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.735 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
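The same scan now repeats for HugePages_Surp and, further down, for HugePages_Rsvd; verify_nr_hugepages then weighs those counters against the 1536 pages requested through HUGENODE. A simplified stand-alone version of that arithmetic check, meant as an illustration in the spirit of the trace rather than the script's exact logic:

  #!/usr/bin/env bash
  # After setup.sh ran with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024',
  # the global pool should report 1536 pages and no surplus pages.
  expected=1536

  read -r total free rsvd surp < <(
    awk '/^HugePages_Total:/ {t=$2}
         /^HugePages_Free:/  {f=$2}
         /^HugePages_Rsvd:/  {r=$2}
         /^HugePages_Surp:/  {s=$2}
         END {print t, f, r, s}' /proc/meminfo
  )

  echo "total=$total free=$free rsvd=$rsvd surp=$surp"
  (( total == expected && surp == 0 )) || {
    echo "hugepage pool does not match the requested layout" >&2
    exit 1
  }

The meminfo dumps above report HugePages_Total: 1536, HugePages_Free: 1536 and zero Rsvd/Surp, so a check of this shape passes on the CI host.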
00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104033212 kB' 'MemAvailable: 107519168 kB' 'Buffers: 2704 kB' 'Cached: 14471388 kB' 'SwapCached: 0 kB' 'Active: 11531284 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057100 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583936 kB' 'Mapped: 201176 kB' 'Shmem: 10476460 kB' 'KReclaimable: 529268 kB' 'Slab: 1399828 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870560 kB' 'KernelStack: 27264 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12629176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.736 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
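The helper traced here takes an optional node argument (the empty "local node=" in the trace), which points it at /sys/devices/system/node/node$node/meminfo instead of /proc/meminfo for the per-node half of the verification. As an aside, the per-node split the test configured (512 pages on node0, 1024 on node1) can also be read from the kernel's standard per-node hugepage sysfs counters; those exact files are not shown in this excerpt, so treat the paths below as the generic kernel layout rather than what hugepages.sh itself reads:

  #!/usr/bin/env bash
  # Print nr/free 2 MiB hugepages per NUMA node (Hugepagesize is 2048 kB
  # in the meminfo dumps above).
  for node_dir in /sys/devices/system/node/node[0-9]*; do
    hp="$node_dir/hugepages/hugepages-2048kB"
    [[ -r $hp/nr_hugepages ]] || continue
    printf '%s: nr=%s free=%s\n' \
      "${node_dir##*/}" "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")"
  done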
00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.737 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:07.738 nr_hugepages=1536 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.738 resv_hugepages=0 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.738 surplus_hugepages=0 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.738 anon_hugepages=0 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104033580 kB' 'MemAvailable: 107519536 kB' 'Buffers: 2704 kB' 'Cached: 14471412 kB' 'SwapCached: 0 kB' 'Active: 11531304 kB' 'Inactive: 3523448 kB' 'Active(anon): 11057120 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583936 kB' 'Mapped: 201176 kB' 'Shmem: 10476484 kB' 'KReclaimable: 529268 kB' 'Slab: 1399828 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870560 kB' 'KernelStack: 27264 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12629328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
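Further down in this trace the same helper is called for HugePages_Total (returning 1536) and then once per NUMA node for HugePages_Surp, with node0 carrying 512 pages and node1 1024 in this run, so the custom allocation adds up to the requested total. A minimal stand-alone version of that consistency check (illustrative names and paths, not the test script itself):

#!/usr/bin/env bash

# Global check: HugePages_Total must equal the requested nr_hugepages plus
# surplus and reserved pages (1536, 0 and 0 respectively in this run).
nr_hugepages=1536 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) && echo "global hugepage count OK: $total"

# Per-node check: per-node meminfo lines look like
# "Node 0 HugePages_Total:   512", so the key is field 3 and the value field 4.
for node_meminfo in /sys/devices/system/node/node[0-9]*/meminfo; do
    node=${node_meminfo%/meminfo}; node=${node##*node}
    node_total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_meminfo")
    node_surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_meminfo")
    echo "node${node}: HugePages_Total=${node_total} HugePages_Surp=${node_surp}"
done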
00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.738 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.739 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53661968 kB' 'MemUsed: 11997040 kB' 'SwapCached: 0 kB' 'Active: 4820352 kB' 'Inactive: 3298656 kB' 'Active(anon): 4667792 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809908 kB' 'Mapped: 117820 kB' 'AnonPages: 312276 kB' 'Shmem: 4358692 kB' 'KernelStack: 16120 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396100 kB' 'Slab: 909168 kB' 'SReclaimable: 396100 kB' 'SUnreclaim: 513068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.740 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50370688 kB' 'MemUsed: 10309184 kB' 'SwapCached: 0 kB' 'Active: 6711320 kB' 'Inactive: 224792 kB' 'Active(anon): 6389696 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 224792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6664244 kB' 'Mapped: 83356 kB' 'AnonPages: 272040 kB' 'Shmem: 6117828 kB' 'KernelStack: 11176 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133168 kB' 'Slab: 490660 kB' 'SReclaimable: 133168 kB' 'SUnreclaim: 357492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.741 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.742 19:59:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.743 node0=512 expecting 512 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:07.743 node1=1024 expecting 1024 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:07.743 00:04:07.743 real 0m3.679s 00:04:07.743 user 0m1.466s 00:04:07.743 sys 0m2.251s 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.743 19:59:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.743 ************************************ 00:04:07.743 END TEST custom_alloc 00:04:07.743 ************************************ 00:04:08.004 19:59:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.004 19:59:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:08.004 19:59:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.004 19:59:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.004 19:59:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.004 ************************************ 00:04:08.005 START TEST no_shrink_alloc 00:04:08.005 ************************************ 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.005 19:59:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:11.305 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:11.305 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:11.305 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.570 
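The get_meminfo traces that follow all repeat the same per-key scan of /proc/meminfo, so here is a condensed sketch of the parsing logic visible in the xtrace above. It is a reconstruction from this log, not the traced setup/common.sh itself, and the shipped helper may differ in detail: it optionally switches to a per-node meminfo file, strips the "Node N " prefix, then splits each line on ': ' until the requested field is found.

  #!/usr/bin/env bash
  shopt -s extglob

  # Minimal re-creation of the meminfo lookup traced above: print the value of
  # one /proc/meminfo (or per-node meminfo) field, e.g. HugePages_Surp -> 0
  # for the snapshot printed below.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookup only when a node id was passed and the sysfs file exists;
      # with an empty $node the traced run falls through to /proc/meminfo.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo

      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files

      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp        # system-wide lookup, as traced below
  get_meminfo HugePages_Free 0      # per-node lookup (illustrative invocation)

In the no_shrink_alloc prologue below, verify_nr_hugepages calls this helper for AnonHugePages and HugePages_Surp (both read 0 in the printed snapshot, giving anon=0 and surp=0) and then starts a lookup of HugePages_Rsvd as this part of the log ends.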
19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105079872 kB' 'MemAvailable: 108565828 kB' 'Buffers: 2704 kB' 'Cached: 14471560 kB' 'SwapCached: 0 kB' 'Active: 11533456 kB' 'Inactive: 3523448 kB' 'Active(anon): 11059272 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585448 kB' 'Mapped: 201332 kB' 'Shmem: 10476632 kB' 'KReclaimable: 529268 kB' 'Slab: 1399740 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870472 kB' 'KernelStack: 27296 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12633540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.571 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105080316 kB' 'MemAvailable: 108566272 kB' 'Buffers: 2704 kB' 'Cached: 14471564 kB' 'SwapCached: 0 kB' 'Active: 11532636 kB' 
'Inactive: 3523448 kB' 'Active(anon): 11058452 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585188 kB' 'Mapped: 201192 kB' 'Shmem: 10476636 kB' 'KReclaimable: 529268 kB' 'Slab: 1399720 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870452 kB' 'KernelStack: 27328 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12632000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 
19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.572 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105081268 kB' 'MemAvailable: 108567224 kB' 'Buffers: 2704 kB' 'Cached: 14471580 kB' 'SwapCached: 0 kB' 'Active: 11532696 kB' 'Inactive: 3523448 kB' 'Active(anon): 11058512 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 585216 kB' 'Mapped: 201192 kB' 'Shmem: 10476652 kB' 'KReclaimable: 529268 kB' 'Slab: 1399720 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870452 kB' 'KernelStack: 27216 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12631852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.573 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 
19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.575 nr_hugepages=1024 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.575 resv_hugepages=0 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.575 surplus_hugepages=0 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.575 anon_hugepages=0 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105080408 kB' 'MemAvailable: 108566364 kB' 'Buffers: 2704 kB' 'Cached: 14471604 kB' 'SwapCached: 0 kB' 'Active: 11533384 kB' 'Inactive: 3523448 kB' 'Active(anon): 11059200 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585912 kB' 'Mapped: 201192 kB' 'Shmem: 10476676 kB' 'KReclaimable: 529268 kB' 'Slab: 1399720 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 870452 kB' 'KernelStack: 27376 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12633604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.575 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.576 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 
19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52619916 kB' 'MemUsed: 13039092 kB' 'SwapCached: 0 kB' 'Active: 4821176 kB' 'Inactive: 3298656 kB' 'Active(anon): 4668616 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809980 kB' 'Mapped: 117836 kB' 'AnonPages: 313064 kB' 'Shmem: 4358764 kB' 'KernelStack: 16264 kB' 'PageTables: 5064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 396100 kB' 'Slab: 909144 kB' 'SReclaimable: 396100 kB' 'SUnreclaim: 513044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.577 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.578 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.578 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.578 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.578 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.578 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.838 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.839 node0=1024 expecting 1024 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.839 19:59:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.144 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 
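[Editor's note] The xtrace above is setup/common.sh's get_meminfo walking a meminfo file one field at a time with IFS=': ', emitting one "continue" for every key that is not the one requested (HugePages_Surp here) and finally echoing its value, 0, back to hugepages.sh, which then prints "node0=1024 expecting 1024" and reruns "setup output" with NRHUGE=512 and CLEAR_HUGE=no. The sketch below is a minimal reconstruction of that parsing pattern as it appears in the trace; the function name, argument handling, and loop form are illustrative assumptions, not the verbatim SPDK helper.

    # Minimal sketch, assuming bash with extglob; mirrors the scan visible in
    # the trace: mapfile the meminfo file, strip the "Node N " prefix that
    # per-node files carry, read each line with IFS=': ', skip non-matching
    # keys, and echo the value of the requested one.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem line var val _
        # Per-node counters live under /sys, matching the
        # [[ -e /sys/devices/system/node/node$node/meminfo ]] check in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # each skipped key is one "continue" in the xtrace
            echo "$val"                         # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Read against the snapshot printed later in the log (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0), the INFO line that follows ("Requested 512 hugepages but 1024 already allocated on node0") is presumably the behavior the no_shrink_alloc case is exercising: rerunning setup.sh with NRHUGE=512 and CLEAR_HUGE=no leaves the existing 1024-page reservation in place, and the verify_nr_hugepages pass that follows re-derives anon, surp, and resv through the same kind of get_meminfo calls (AnonHugePages, HugePages_Surp, HugePages_Rsvd).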
00:04:15.144 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:15.144 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:15.144 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105069132 kB' 'MemAvailable: 108555088 kB' 'Buffers: 2704 kB' 'Cached: 14471696 kB' 'SwapCached: 0 kB' 'Active: 11535640 kB' 'Inactive: 3523448 kB' 'Active(anon): 11061456 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587836 kB' 'Mapped: 201296 kB' 'Shmem: 10476768 kB' 'KReclaimable: 529268 kB' 'Slab: 1398528 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869260 kB' 'KernelStack: 27248 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 
19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.145 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105070496 kB' 'MemAvailable: 108556452 kB' 'Buffers: 2704 kB' 'Cached: 14471704 kB' 'SwapCached: 0 kB' 'Active: 11534408 kB' 'Inactive: 3523448 kB' 'Active(anon): 11060224 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586784 kB' 'Mapped: 201208 kB' 'Shmem: 10476776 kB' 'KReclaimable: 529268 kB' 'Slab: 1398520 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869252 kB' 'KernelStack: 27280 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12632584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.146 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105069932 kB' 'MemAvailable: 108555888 kB' 'Buffers: 2704 kB' 'Cached: 14471720 kB' 'SwapCached: 0 kB' 'Active: 11534796 kB' 'Inactive: 3523448 kB' 'Active(anon): 11060612 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587164 kB' 'Mapped: 201208 kB' 'Shmem: 10476792 kB' 'KReclaimable: 529268 kB' 'Slab: 1398520 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869252 kB' 'KernelStack: 27408 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.147 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.148 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.148 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.149 nr_hugepages=1024 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.149 resv_hugepages=0 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.149 surplus_hugepages=0 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.149 anon_hugepages=0 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105070580 kB' 'MemAvailable: 108556536 kB' 'Buffers: 2704 kB' 'Cached: 14471756 kB' 'SwapCached: 0 kB' 'Active: 11535056 kB' 'Inactive: 3523448 kB' 'Active(anon): 11060872 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587436 kB' 'Mapped: 201208 kB' 'Shmem: 10476828 kB' 'KReclaimable: 529268 kB' 'Slab: 1398488 kB' 'SReclaimable: 529268 kB' 'SUnreclaim: 869220 kB' 'KernelStack: 27488 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4416884 kB' 'DirectMap2M: 29865984 kB' 'DirectMap1G: 101711872 kB' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.149 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 
19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.150 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.151 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52611104 kB' 'MemUsed: 13047904 kB' 'SwapCached: 0 kB' 'Active: 4823812 kB' 'Inactive: 3298656 kB' 'Active(anon): 4671252 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3298656 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7809996 kB' 'Mapped: 117860 kB' 'AnonPages: 315772 kB' 'Shmem: 4358780 kB' 'KernelStack: 16104 kB' 'PageTables: 5060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396100 kB' 'Slab: 908112 kB' 'SReclaimable: 396100 kB' 'SUnreclaim: 512012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.151 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 
19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.152 19:59:12 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.152 node0=1024 expecting 1024 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.152 00:04:15.152 real 0m7.255s 00:04:15.152 user 0m2.770s 00:04:15.152 sys 0m4.527s 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.152 19:59:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.152 ************************************ 00:04:15.152 END TEST no_shrink_alloc 00:04:15.152 ************************************ 00:04:15.152 19:59:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.152 19:59:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.152 00:04:15.152 real 0m26.948s 00:04:15.152 user 0m10.691s 00:04:15.152 sys 0m16.549s 00:04:15.152 19:59:12 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.152 19:59:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.152 ************************************ 00:04:15.152 END TEST hugepages 00:04:15.152 ************************************ 00:04:15.414 19:59:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:15.414 19:59:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.414 19:59:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.414 19:59:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.414 19:59:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.414 ************************************ 00:04:15.414 START TEST driver 00:04:15.414 ************************************ 00:04:15.414 19:59:12 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.414 * Looking for test storage... 
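The no_shrink_alloc trace above is dominated by a single field scan: setup/common.sh walks a meminfo-style file with IFS=': ', hits "continue" for every field that is not the one requested (here HugePages_Surp), and echoes the value once it matches. A minimal sketch of that scan, assuming a hypothetical helper name and input path:

    get_meminfo_field() {
        local file=$1 field=$2 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$field" ]] || continue   # the long runs of "continue" in the trace are this skip
            echo "$val"                          # e.g. 0 surplus hugepages in the run above
            return 0
        done <"$file"
        echo 0                                   # field absent
    }
    # get_meminfo_field /proc/meminfo HugePages_Surp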
00:04:15.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:15.414 19:59:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:15.414 19:59:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.414 19:59:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.704 19:59:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.704 19:59:17 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.704 19:59:17 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.704 19:59:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.704 ************************************ 00:04:20.704 START TEST guess_driver 00:04:20.704 ************************************ 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.704 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.705 19:59:17 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:20.705 Looking for driver=vfio-pci 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.705 19:59:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.076 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.077 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.077 19:59:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.077 19:59:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.371 00:04:29.371 real 0m8.643s 00:04:29.371 user 0m2.817s 00:04:29.371 sys 0m5.064s 00:04:29.371 19:59:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.371 19:59:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.371 ************************************ 00:04:29.371 END TEST guess_driver 00:04:29.371 ************************************ 00:04:29.371 19:59:26 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:29.371 00:04:29.371 real 0m13.598s 00:04:29.371 user 0m4.324s 00:04:29.371 sys 0m7.703s 00:04:29.371 19:59:26 
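The guess_driver pass above reduces to a small decision: vfio is preferred when IOMMU groups exist (314 in this run) or unsafe no-IOMMU mode is enabled, and when modprobe --show-depends resolves vfio_pci to real .ko objects; otherwise the script reports 'No valid driver found'. A sketch of that logic, with the function name assumed:

    pick_driver() {
        local unsafe_vfio=N iommu_groups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if [[ $unsafe_vfio == Y ]] || (( ${#iommu_groups[@]} > 0 )); then
            [[ $(modprobe --show-depends vfio_pci) == *.ko* ]] && { echo vfio-pci; return 0; }
        fi
        echo 'No valid driver found'
    }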
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.371 19:59:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.371 ************************************ 00:04:29.371 END TEST driver 00:04:29.371 ************************************ 00:04:29.371 19:59:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:29.371 19:59:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:29.371 19:59:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.371 19:59:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.371 19:59:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.371 ************************************ 00:04:29.371 START TEST devices 00:04:29.371 ************************************ 00:04:29.371 19:59:26 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:29.371 * Looking for test storage... 00:04:29.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.371 19:59:26 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:29.371 19:59:26 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:29.371 19:59:26 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.371 19:59:26 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:33.581 
19:59:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.581 No valid GPT data, bailing 00:04:33.581 19:59:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.581 19:59:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.581 19:59:30 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.581 19:59:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.581 19:59:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.581 ************************************ 00:04:33.581 START TEST nvme_mount 00:04:33.581 ************************************ 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
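Before nvme_mount starts, the devices suite above picks its test disk: each /sys/block/nvme* entry is kept only if it is not zoned, carries no partition table (spdk-gpt.py prints 'No valid GPT data, bailing' and blkid returns an empty PTTYPE), and is at least min_disk_size=3221225472 bytes; the survivor (nvme0n1 at 0000:65:00.0, ~1.92 TB) becomes test_disk. A simplified reconstruction, with the PCI lookup path assumed:

    min_disk_size=3221225472   # 3 GiB
    blocks=(); declare -A blocks_to_pci
    for block in /sys/block/nvme*; do            # the real glob also filters out *c* controller names
        dev=${block##*/}
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # already partitioned, treat as in use
        (( $(<"$block/size") * 512 >= min_disk_size )) || continue
        blocks+=("$dev")
        blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")  # e.g. 0000:65:00.0
    done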
# (( part <= part_no )) 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.581 19:59:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.153 Creating new GPT entries in memory. 00:04:34.153 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.153 other utilities. 00:04:34.153 19:59:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.153 19:59:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.153 19:59:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.153 19:59:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.153 19:59:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.123 Creating new GPT entries in memory. 00:04:35.123 The operation has completed successfully. 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 751118 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.123 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.384 19:59:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.384 19:59:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.683 19:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.683 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.683 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.945 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.945 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.945 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.945 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.945 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:38.945 19:59:36 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:38.945 19:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.205 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:39.206 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.206 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.206 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:39.206 19:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.206 19:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.206 19:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.506 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.507 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.768 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.768 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:42.768 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.768 19:59:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.768 19:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.768 19:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.069 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.070 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.070 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.070 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.070 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.070 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.330 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.330 00:04:46.330 real 0m13.270s 00:04:46.330 user 0m4.107s 00:04:46.330 sys 0m6.973s 00:04:46.330 19:59:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.330 19:59:43 
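Stripped of the xtrace noise, the nvme_mount run above is a short partition/format/mount round trip followed by the cleanup it verifies. A condensed sketch (the workspace prefix is shortened to $SPDK_ROOT):

    disk=/dev/nvme0n1
    mnt=$SPDK_ROOT/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                              # drop any existing GPT
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition -> nvme0n1p1
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
    : >"$mnt/test_nvme"                                   # dummy file the verify step looks for
    # ... verify against PCI_ALLOWED=0000:65:00.0, then clean up ...
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"

A second pass in the same trace repeats the flow against the whole disk (mkfs.ext4 -qF /dev/nvme0n1 1024M, mounted at the same point) before the final wipefs.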
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.330 ************************************ 00:04:46.330 END TEST nvme_mount 00:04:46.330 ************************************ 00:04:46.330 19:59:43 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:46.330 19:59:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:46.330 19:59:43 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.330 19:59:43 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.330 19:59:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.591 ************************************ 00:04:46.591 START TEST dm_mount 00:04:46.591 ************************************ 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.591 19:59:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:47.533 Creating new GPT entries in memory. 00:04:47.533 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.533 other utilities. 00:04:47.533 19:59:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.533 19:59:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.533 19:59:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:47.533 19:59:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.533 19:59:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:48.480 Creating new GPT entries in memory. 00:04:48.480 The operation has completed successfully. 00:04:48.480 19:59:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.480 19:59:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.480 19:59:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.480 19:59:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.480 19:59:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:49.421 The operation has completed successfully. 00:04:49.421 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.421 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.421 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 756294 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.682 19:59:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.053 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:53.314 19:59:50 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.314 19:59:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.610 19:59:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:56.872 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:56.872 00:04:56.872 real 0m10.350s 00:04:56.872 user 0m2.817s 00:04:56.872 sys 0m4.591s 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.872 19:59:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:56.872 ************************************ 00:04:56.872 END TEST dm_mount 00:04:56.872 ************************************ 00:04:56.872 19:59:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.872 19:59:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.134 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:57.134 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:57.134 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:57.134 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.134 19:59:54 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:57.134 00:04:57.134 real 0m28.169s 00:04:57.134 user 0m8.581s 00:04:57.134 sys 0m14.334s 00:04:57.134 19:59:54 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.134 19:59:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.134 ************************************ 00:04:57.134 END TEST devices 00:04:57.134 ************************************ 00:04:57.134 19:59:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:57.134 00:04:57.134 real 1m34.523s 00:04:57.134 user 0m32.253s 00:04:57.134 sys 0m53.418s 00:04:57.134 19:59:54 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.134 19:59:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.134 ************************************ 00:04:57.134 END TEST setup.sh 00:04:57.134 ************************************ 00:04:57.134 19:59:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.134 19:59:54 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:00.439 Hugepages 00:05:00.439 node hugesize free / total 00:05:00.439 node0 1048576kB 0 / 0 00:05:00.439 node0 2048kB 2048 / 2048 00:05:00.439 node1 1048576kB 0 / 0 00:05:00.439 node1 2048kB 0 / 0 00:05:00.439 00:05:00.439 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.439 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:00.439 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:00.439 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:00.439 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:00.439 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:00.700 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:00.700 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:00.700 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:00.700 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:00.700 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:00.700 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:00.700 19:59:58 -- spdk/autotest.sh@130 -- # uname -s 00:05:00.700 19:59:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:00.700 19:59:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:00.700 19:59:58 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.007 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.007 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.269 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.269 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.269 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.269 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:04.269 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.269 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:06.179 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:06.179 20:00:03 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:07.561 20:00:04 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:07.561 20:00:04 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:07.561 20:00:04 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.561 20:00:04 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:07.561 20:00:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.561 20:00:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.561 20:00:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.561 20:00:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.561 20:00:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.561 20:00:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.561 20:00:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:07.561 20:00:04 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.859 Waiting for block devices as requested 00:05:10.859 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:10.859 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:10.859 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:10.859 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:10.859 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:11.121 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:11.121 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:11.121 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:11.381 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:11.381 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:11.642 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:11.642 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:11.642 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:11.902 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:11.902 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:11.902 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:11.902 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:12.163 20:00:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:12.163 20:00:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:12.163 20:00:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:12.163 20:00:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:12.163 20:00:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:12.163 20:00:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:12.163 20:00:09 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:12.163 20:00:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:12.163 20:00:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:12.163 20:00:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:12.163 20:00:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:12.163 20:00:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:12.424 20:00:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:12.424 20:00:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:12.424 20:00:09 -- common/autotest_common.sh@1557 -- # continue 00:05:12.424 20:00:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:12.424 20:00:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.424 20:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:12.424 20:00:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:12.424 20:00:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.424 20:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:12.424 20:00:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.723 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.723 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:15.723 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:15.723 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:15.723 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:15.723 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:15.723 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:15.724 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:15.724 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.724 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
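The namespace-revert step traced above keys off the controller's OACS (Optional Admin Command Support) field: bit 3 (mask 0x8) means Namespace Management is supported, and unvmcap is the unallocated NVM capacity left to revert. A minimal stand-alone version of that probe, assuming nvme-cli is installed and the controller is /dev/nvme0 as it is in this run, could look like:
# Hedged sketch of the id-ctrl probe performed by nvme_namespace_revert above.
ctrl=/dev/nvme0                                    # controller node reported by the trace; differs per host
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then                   # bit 3: Namespace Management/Attachment commands
    echo "namespace management supported on $ctrl"
fi
nvme id-ctrl "$ctrl" | grep unvmcap                # 0 means no unallocated capacity, nothing to revert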
00:05:15.724 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:15.724 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:15.984 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:15.984 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:15.984 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:15.984 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:15.984 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:16.246 20:00:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:16.246 20:00:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.246 20:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.246 20:00:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:16.246 20:00:13 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:16.246 20:00:13 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:16.246 20:00:13 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:16.246 20:00:13 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:16.246 20:00:13 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:16.246 20:00:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:16.246 20:00:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:16.246 20:00:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.246 20:00:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.246 20:00:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:16.507 20:00:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:16.507 20:00:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:16.507 20:00:13 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:16.507 20:00:13 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:16.507 20:00:13 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:16.507 20:00:13 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:16.507 20:00:13 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:16.507 20:00:13 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:16.507 20:00:13 -- common/autotest_common.sh@1593 -- # return 0 00:05:16.507 20:00:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:16.507 20:00:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:16.507 20:00:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:16.507 20:00:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:16.507 20:00:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:16.507 20:00:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.507 20:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.507 20:00:13 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:16.507 20:00:13 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.507 20:00:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.507 20:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.507 20:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.507 ************************************ 00:05:16.507 START TEST env 00:05:16.507 ************************************ 00:05:16.507 20:00:13 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.507 * Looking for test storage... 
00:05:16.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:16.507 20:00:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.507 20:00:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.507 20:00:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.507 20:00:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.507 ************************************ 00:05:16.507 START TEST env_memory 00:05:16.507 ************************************ 00:05:16.507 20:00:13 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.507 00:05:16.507 00:05:16.507 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.507 http://cunit.sourceforge.net/ 00:05:16.507 00:05:16.507 00:05:16.507 Suite: memory 00:05:16.507 Test: alloc and free memory map ...[2024-07-15 20:00:13.932355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.768 passed 00:05:16.768 Test: mem map translation ...[2024-07-15 20:00:13.958044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:16.768 [2024-07-15 20:00:13.958081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.768 [2024-07-15 20:00:13.958133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.768 [2024-07-15 20:00:13.958141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.768 passed 00:05:16.768 Test: mem map registration ...[2024-07-15 20:00:14.013544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:16.768 [2024-07-15 20:00:14.013570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:16.768 passed 00:05:16.768 Test: mem map adjacent registrations ...passed 00:05:16.768 00:05:16.768 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.768 suites 1 1 n/a 0 0 00:05:16.768 tests 4 4 4 0 0 00:05:16.768 asserts 152 152 152 0 n/a 00:05:16.768 00:05:16.768 Elapsed time = 0.193 seconds 00:05:16.768 00:05:16.768 real 0m0.207s 00:05:16.768 user 0m0.194s 00:05:16.768 sys 0m0.012s 00:05:16.768 20:00:14 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.768 20:00:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:16.768 ************************************ 00:05:16.768 END TEST env_memory 00:05:16.768 ************************************ 00:05:16.768 20:00:14 env -- common/autotest_common.sh@1142 -- # return 0 00:05:16.768 20:00:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:16.768 20:00:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:16.768 20:00:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.768 20:00:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.768 ************************************ 00:05:16.768 START TEST env_vtophys 00:05:16.768 ************************************ 00:05:16.768 20:00:14 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:16.768 EAL: lib.eal log level changed from notice to debug 00:05:16.768 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.768 EAL: Detected lcore 1 as core 1 on socket 0 00:05:16.768 EAL: Detected lcore 2 as core 2 on socket 0 00:05:16.768 EAL: Detected lcore 3 as core 3 on socket 0 00:05:16.768 EAL: Detected lcore 4 as core 4 on socket 0 00:05:16.768 EAL: Detected lcore 5 as core 5 on socket 0 00:05:16.768 EAL: Detected lcore 6 as core 6 on socket 0 00:05:16.768 EAL: Detected lcore 7 as core 7 on socket 0 00:05:16.768 EAL: Detected lcore 8 as core 8 on socket 0 00:05:16.768 EAL: Detected lcore 9 as core 9 on socket 0 00:05:16.768 EAL: Detected lcore 10 as core 10 on socket 0 00:05:16.768 EAL: Detected lcore 11 as core 11 on socket 0 00:05:16.769 EAL: Detected lcore 12 as core 12 on socket 0 00:05:16.769 EAL: Detected lcore 13 as core 13 on socket 0 00:05:16.769 EAL: Detected lcore 14 as core 14 on socket 0 00:05:16.769 EAL: Detected lcore 15 as core 15 on socket 0 00:05:16.769 EAL: Detected lcore 16 as core 16 on socket 0 00:05:16.769 EAL: Detected lcore 17 as core 17 on socket 0 00:05:16.769 EAL: Detected lcore 18 as core 18 on socket 0 00:05:16.769 EAL: Detected lcore 19 as core 19 on socket 0 00:05:16.769 EAL: Detected lcore 20 as core 20 on socket 0 00:05:16.769 EAL: Detected lcore 21 as core 21 on socket 0 00:05:16.769 EAL: Detected lcore 22 as core 22 on socket 0 00:05:16.769 EAL: Detected lcore 23 as core 23 on socket 0 00:05:16.769 EAL: Detected lcore 24 as core 24 on socket 0 00:05:16.769 EAL: Detected lcore 25 as core 25 on socket 0 00:05:16.769 EAL: Detected lcore 26 as core 26 on socket 0 00:05:16.769 EAL: Detected lcore 27 as core 27 on socket 0 00:05:16.769 EAL: Detected lcore 28 as core 28 on socket 0 00:05:16.769 EAL: Detected lcore 29 as core 29 on socket 0 00:05:16.769 EAL: Detected lcore 30 as core 30 on socket 0 00:05:16.769 EAL: Detected lcore 31 as core 31 on socket 0 00:05:16.769 EAL: Detected lcore 32 as core 32 on socket 0 00:05:16.769 EAL: Detected lcore 33 as core 33 on socket 0 00:05:16.769 EAL: Detected lcore 34 as core 34 on socket 0 00:05:16.769 EAL: Detected lcore 35 as core 35 on socket 0 00:05:16.769 EAL: Detected lcore 36 as core 0 on socket 1 00:05:16.769 EAL: Detected lcore 37 as core 1 on socket 1 00:05:16.769 EAL: Detected lcore 38 as core 2 on socket 1 00:05:16.769 EAL: Detected lcore 39 as core 3 on socket 1 00:05:16.769 EAL: Detected lcore 40 as core 4 on socket 1 00:05:16.769 EAL: Detected lcore 41 as core 5 on socket 1 00:05:16.769 EAL: Detected lcore 42 as core 6 on socket 1 00:05:16.769 EAL: Detected lcore 43 as core 7 on socket 1 00:05:16.769 EAL: Detected lcore 44 as core 8 on socket 1 00:05:16.769 EAL: Detected lcore 45 as core 9 on socket 1 00:05:16.769 EAL: Detected lcore 46 as core 10 on socket 1 00:05:16.769 EAL: Detected lcore 47 as core 11 on socket 1 00:05:16.769 EAL: Detected lcore 48 as core 12 on socket 1 00:05:16.769 EAL: Detected lcore 49 as core 13 on socket 1 00:05:16.769 EAL: Detected lcore 50 as core 14 on socket 1 00:05:16.769 EAL: Detected lcore 51 as core 15 on socket 1 00:05:16.769 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:16.769 EAL: Detected lcore 53 as core 17 on socket 1 00:05:16.769 EAL: Detected lcore 54 as core 18 on socket 1 00:05:16.769 EAL: Detected lcore 55 as core 19 on socket 1 00:05:16.769 EAL: Detected lcore 56 as core 20 on socket 1 00:05:16.769 EAL: Detected lcore 57 as core 21 on socket 1 00:05:16.769 EAL: Detected lcore 58 as core 22 on socket 1 00:05:16.769 EAL: Detected lcore 59 as core 23 on socket 1 00:05:16.769 EAL: Detected lcore 60 as core 24 on socket 1 00:05:16.769 EAL: Detected lcore 61 as core 25 on socket 1 00:05:16.769 EAL: Detected lcore 62 as core 26 on socket 1 00:05:16.769 EAL: Detected lcore 63 as core 27 on socket 1 00:05:16.769 EAL: Detected lcore 64 as core 28 on socket 1 00:05:16.769 EAL: Detected lcore 65 as core 29 on socket 1 00:05:16.769 EAL: Detected lcore 66 as core 30 on socket 1 00:05:16.769 EAL: Detected lcore 67 as core 31 on socket 1 00:05:16.769 EAL: Detected lcore 68 as core 32 on socket 1 00:05:16.769 EAL: Detected lcore 69 as core 33 on socket 1 00:05:16.769 EAL: Detected lcore 70 as core 34 on socket 1 00:05:16.769 EAL: Detected lcore 71 as core 35 on socket 1 00:05:16.769 EAL: Detected lcore 72 as core 0 on socket 0 00:05:16.769 EAL: Detected lcore 73 as core 1 on socket 0 00:05:16.769 EAL: Detected lcore 74 as core 2 on socket 0 00:05:16.769 EAL: Detected lcore 75 as core 3 on socket 0 00:05:16.769 EAL: Detected lcore 76 as core 4 on socket 0 00:05:16.769 EAL: Detected lcore 77 as core 5 on socket 0 00:05:16.769 EAL: Detected lcore 78 as core 6 on socket 0 00:05:16.769 EAL: Detected lcore 79 as core 7 on socket 0 00:05:16.769 EAL: Detected lcore 80 as core 8 on socket 0 00:05:16.769 EAL: Detected lcore 81 as core 9 on socket 0 00:05:16.769 EAL: Detected lcore 82 as core 10 on socket 0 00:05:16.769 EAL: Detected lcore 83 as core 11 on socket 0 00:05:16.769 EAL: Detected lcore 84 as core 12 on socket 0 00:05:16.769 EAL: Detected lcore 85 as core 13 on socket 0 00:05:16.769 EAL: Detected lcore 86 as core 14 on socket 0 00:05:16.769 EAL: Detected lcore 87 as core 15 on socket 0 00:05:16.769 EAL: Detected lcore 88 as core 16 on socket 0 00:05:16.769 EAL: Detected lcore 89 as core 17 on socket 0 00:05:16.769 EAL: Detected lcore 90 as core 18 on socket 0 00:05:16.769 EAL: Detected lcore 91 as core 19 on socket 0 00:05:16.769 EAL: Detected lcore 92 as core 20 on socket 0 00:05:16.769 EAL: Detected lcore 93 as core 21 on socket 0 00:05:16.769 EAL: Detected lcore 94 as core 22 on socket 0 00:05:16.769 EAL: Detected lcore 95 as core 23 on socket 0 00:05:16.769 EAL: Detected lcore 96 as core 24 on socket 0 00:05:16.769 EAL: Detected lcore 97 as core 25 on socket 0 00:05:16.769 EAL: Detected lcore 98 as core 26 on socket 0 00:05:16.769 EAL: Detected lcore 99 as core 27 on socket 0 00:05:16.769 EAL: Detected lcore 100 as core 28 on socket 0 00:05:16.769 EAL: Detected lcore 101 as core 29 on socket 0 00:05:16.769 EAL: Detected lcore 102 as core 30 on socket 0 00:05:16.769 EAL: Detected lcore 103 as core 31 on socket 0 00:05:16.769 EAL: Detected lcore 104 as core 32 on socket 0 00:05:16.769 EAL: Detected lcore 105 as core 33 on socket 0 00:05:16.769 EAL: Detected lcore 106 as core 34 on socket 0 00:05:16.769 EAL: Detected lcore 107 as core 35 on socket 0 00:05:16.769 EAL: Detected lcore 108 as core 0 on socket 1 00:05:16.769 EAL: Detected lcore 109 as core 1 on socket 1 00:05:16.769 EAL: Detected lcore 110 as core 2 on socket 1 00:05:16.769 EAL: Detected lcore 111 as core 3 on socket 1 00:05:16.769 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:16.769 EAL: Detected lcore 113 as core 5 on socket 1 00:05:16.769 EAL: Detected lcore 114 as core 6 on socket 1 00:05:16.769 EAL: Detected lcore 115 as core 7 on socket 1 00:05:16.769 EAL: Detected lcore 116 as core 8 on socket 1 00:05:16.769 EAL: Detected lcore 117 as core 9 on socket 1 00:05:16.769 EAL: Detected lcore 118 as core 10 on socket 1 00:05:16.769 EAL: Detected lcore 119 as core 11 on socket 1 00:05:16.769 EAL: Detected lcore 120 as core 12 on socket 1 00:05:16.769 EAL: Detected lcore 121 as core 13 on socket 1 00:05:16.769 EAL: Detected lcore 122 as core 14 on socket 1 00:05:16.769 EAL: Detected lcore 123 as core 15 on socket 1 00:05:16.769 EAL: Detected lcore 124 as core 16 on socket 1 00:05:16.769 EAL: Detected lcore 125 as core 17 on socket 1 00:05:16.769 EAL: Detected lcore 126 as core 18 on socket 1 00:05:16.769 EAL: Detected lcore 127 as core 19 on socket 1 00:05:16.769 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:16.769 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:16.769 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:16.769 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:16.769 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:16.769 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:16.769 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:16.769 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:16.769 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:16.769 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:16.769 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:16.769 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:16.769 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:16.769 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:16.769 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:16.769 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:16.769 EAL: Maximum logical cores by configuration: 128 00:05:16.769 EAL: Detected CPU lcores: 128 00:05:16.769 EAL: Detected NUMA nodes: 2 00:05:16.769 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:16.769 EAL: Detected shared linkage of DPDK 00:05:16.769 EAL: No shared files mode enabled, IPC will be disabled 00:05:17.054 EAL: Bus pci wants IOVA as 'DC' 00:05:17.054 EAL: Buses did not request a specific IOVA mode. 00:05:17.054 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:17.054 EAL: Selected IOVA mode 'VA' 00:05:17.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.054 EAL: Probing VFIO support... 00:05:17.054 EAL: IOMMU type 1 (Type 1) is supported 00:05:17.054 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:17.054 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:17.054 EAL: VFIO support initialized 00:05:17.054 EAL: Ask a virtual area of 0x2e000 bytes 00:05:17.054 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:17.054 EAL: Setting up physically contiguous memory... 
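The VFIO probe above ("IOMMU type 1 (Type 1) is supported", "VFIO support initialized") depends on the host exposing an IOMMU and on the vfio-pci module that setup.sh bound the devices to earlier. A rough way to confirm those preconditions outside of EAL, assuming an x86 host like this one, is:
# Hedged sketch: sanity-check the IOMMU/VFIO preconditions the EAL probe relies on.
ls /sys/kernel/iommu_groups | wc -l        # non-zero when the IOMMU is active (allows IOVA mode 'VA')
lsmod | grep -w vfio_pci                   # vfio-pci loaded, so PCI devices can be handed to userspace
dmesg | grep -iE 'DMAR|IOMMU' | head       # kernel messages from IOMMU initialization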
00:05:17.054 EAL: Setting maximum number of open files to 524288 00:05:17.054 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:17.054 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:17.054 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:17.054 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:17.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:17.054 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:17.054 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:17.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:17.054 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:17.054 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:17.054 EAL: Hugepages will be freed exactly as allocated. 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: TSC frequency is ~2400000 KHz 00:05:17.054 EAL: Main lcore 0 is ready (tid=7fd6139eda00;cpuset=[0]) 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 0 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 2MB 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:17.054 EAL: Mem event callback 'spdk:(nil)' registered 00:05:17.054 00:05:17.054 00:05:17.054 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.054 http://cunit.sourceforge.net/ 00:05:17.054 00:05:17.054 00:05:17.054 Suite: components_suite 00:05:17.054 Test: vtophys_malloc_test ...passed 00:05:17.054 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 4MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 4MB 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 6MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 6MB 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 10MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 10MB 00:05:17.054 EAL: Trying to obtain current memory policy. 
00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 18MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 18MB 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 34MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 34MB 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 66MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 66MB 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 130MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 130MB 00:05:17.054 EAL: Trying to obtain current memory policy. 00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.054 EAL: Restoring previous memory policy: 4 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was expanded by 258MB 00:05:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.054 EAL: request: mp_malloc_sync 00:05:17.054 EAL: No shared files mode enabled, IPC is disabled 00:05:17.054 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.054 EAL: Trying to obtain current memory policy. 
00:05:17.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.315 EAL: Restoring previous memory policy: 4 00:05:17.315 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.315 EAL: request: mp_malloc_sync 00:05:17.315 EAL: No shared files mode enabled, IPC is disabled 00:05:17.315 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.315 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.315 EAL: request: mp_malloc_sync 00:05:17.315 EAL: No shared files mode enabled, IPC is disabled 00:05:17.315 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.315 EAL: Trying to obtain current memory policy. 00:05:17.315 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.576 EAL: Restoring previous memory policy: 4 00:05:17.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.576 EAL: request: mp_malloc_sync 00:05:17.576 EAL: No shared files mode enabled, IPC is disabled 00:05:17.576 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.576 EAL: request: mp_malloc_sync 00:05:17.576 EAL: No shared files mode enabled, IPC is disabled 00:05:17.576 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:17.576 passed 00:05:17.576 00:05:17.576 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.576 suites 1 1 n/a 0 0 00:05:17.576 tests 2 2 2 0 0 00:05:17.576 asserts 497 497 497 0 n/a 00:05:17.576 00:05:17.576 Elapsed time = 0.656 seconds 00:05:17.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.576 EAL: request: mp_malloc_sync 00:05:17.576 EAL: No shared files mode enabled, IPC is disabled 00:05:17.576 EAL: Heap on socket 0 was shrunk by 2MB 00:05:17.576 EAL: No shared files mode enabled, IPC is disabled 00:05:17.576 EAL: No shared files mode enabled, IPC is disabled 00:05:17.576 EAL: No shared files mode enabled, IPC is disabled 00:05:17.576 00:05:17.576 real 0m0.781s 00:05:17.576 user 0m0.409s 00:05:17.576 sys 0m0.342s 00:05:17.576 20:00:14 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.576 20:00:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:17.576 ************************************ 00:05:17.576 END TEST env_vtophys 00:05:17.576 ************************************ 00:05:17.576 20:00:14 env -- common/autotest_common.sh@1142 -- # return 0 00:05:17.576 20:00:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:17.576 20:00:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.576 20:00:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.576 20:00:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.837 ************************************ 00:05:17.837 START TEST env_pci 00:05:17.837 ************************************ 00:05:17.837 20:00:15 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:17.837 00:05:17.837 00:05:17.837 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.837 http://cunit.sourceforge.net/ 00:05:17.837 00:05:17.837 00:05:17.837 Suite: pci 00:05:17.837 Test: pci_hook ...[2024-07-15 20:00:15.042027] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 767936 has claimed it 00:05:17.837 EAL: Cannot find device (10000:00:01.0) 00:05:17.837 EAL: Failed to attach device on primary process 00:05:17.837 passed 00:05:17.837 
00:05:17.837 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.837 suites 1 1 n/a 0 0 00:05:17.837 tests 1 1 1 0 0 00:05:17.837 asserts 25 25 25 0 n/a 00:05:17.837 00:05:17.837 Elapsed time = 0.035 seconds 00:05:17.837 00:05:17.837 real 0m0.056s 00:05:17.837 user 0m0.013s 00:05:17.837 sys 0m0.043s 00:05:17.837 20:00:15 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.837 20:00:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:17.837 ************************************ 00:05:17.837 END TEST env_pci 00:05:17.837 ************************************ 00:05:17.837 20:00:15 env -- common/autotest_common.sh@1142 -- # return 0 00:05:17.837 20:00:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:17.837 20:00:15 env -- env/env.sh@15 -- # uname 00:05:17.837 20:00:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:17.837 20:00:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:17.837 20:00:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:17.837 20:00:15 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:17.837 20:00:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.837 20:00:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.837 ************************************ 00:05:17.837 START TEST env_dpdk_post_init 00:05:17.837 ************************************ 00:05:17.837 20:00:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:17.837 EAL: Detected CPU lcores: 128 00:05:17.837 EAL: Detected NUMA nodes: 2 00:05:17.837 EAL: Detected shared linkage of DPDK 00:05:17.837 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.837 EAL: Selected IOVA mode 'VA' 00:05:17.837 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.837 EAL: VFIO support initialized 00:05:17.837 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.097 EAL: Using IOMMU type 1 (Type 1) 00:05:18.097 EAL: Ignore mapping IO port bar(1) 00:05:18.097 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:18.357 EAL: Ignore mapping IO port bar(1) 00:05:18.357 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:18.627 EAL: Ignore mapping IO port bar(1) 00:05:18.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:18.887 EAL: Ignore mapping IO port bar(1) 00:05:18.887 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:18.887 EAL: Ignore mapping IO port bar(1) 00:05:19.146 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:19.146 EAL: Ignore mapping IO port bar(1) 00:05:19.406 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:19.406 EAL: Ignore mapping IO port bar(1) 00:05:19.406 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:19.665 EAL: Ignore mapping IO port bar(1) 00:05:19.665 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:19.924 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:20.185 EAL: Ignore mapping IO port bar(1) 00:05:20.185 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
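Each "Probe PCI driver" line in the env_dpdk_post_init run here corresponds to a device that setup.sh bound to vfio-pci earlier in the log. Which kernel driver currently owns a given BDF can be read straight from sysfs; a small sketch for the NVMe disk used throughout this run (0000:65:00.0, 144d:a80a):
# Hedged sketch: inspect the current driver binding of one BDF from this run.
bdf=0000:65:00.0
basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"   # prints vfio-pci, nvme, ioatdma, ...
lspci -k -s "$bdf"                                           # same information plus the 144d:a80a IDs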
00:05:20.445 EAL: Ignore mapping IO port bar(1) 00:05:20.445 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:20.445 EAL: Ignore mapping IO port bar(1) 00:05:20.705 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:20.705 EAL: Ignore mapping IO port bar(1) 00:05:20.965 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:20.965 EAL: Ignore mapping IO port bar(1) 00:05:21.226 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:21.226 EAL: Ignore mapping IO port bar(1) 00:05:21.226 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:21.487 EAL: Ignore mapping IO port bar(1) 00:05:21.487 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:21.748 EAL: Ignore mapping IO port bar(1) 00:05:21.748 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:21.748 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:21.748 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:22.008 Starting DPDK initialization... 00:05:22.008 Starting SPDK post initialization... 00:05:22.008 SPDK NVMe probe 00:05:22.008 Attaching to 0000:65:00.0 00:05:22.008 Attached to 0000:65:00.0 00:05:22.008 Cleaning up... 00:05:23.920 00:05:23.920 real 0m5.711s 00:05:23.920 user 0m0.178s 00:05:23.920 sys 0m0.077s 00:05:23.920 20:00:20 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.920 20:00:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.920 ************************************ 00:05:23.920 END TEST env_dpdk_post_init 00:05:23.920 ************************************ 00:05:23.920 20:00:20 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.920 20:00:20 env -- env/env.sh@26 -- # uname 00:05:23.920 20:00:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.920 20:00:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.920 20:00:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.920 20:00:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.920 20:00:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.920 ************************************ 00:05:23.920 START TEST env_mem_callbacks 00:05:23.920 ************************************ 00:05:23.920 20:00:20 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.920 EAL: Detected CPU lcores: 128 00:05:23.920 EAL: Detected NUMA nodes: 2 00:05:23.920 EAL: Detected shared linkage of DPDK 00:05:23.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.920 EAL: Selected IOVA mode 'VA' 00:05:23.920 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.920 EAL: VFIO support initialized 00:05:23.920 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.920 00:05:23.920 00:05:23.920 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.920 http://cunit.sourceforge.net/ 00:05:23.920 00:05:23.920 00:05:23.920 Suite: memory 00:05:23.920 Test: test ... 
00:05:23.920 register 0x200000200000 2097152 00:05:23.920 malloc 3145728 00:05:23.920 register 0x200000400000 4194304 00:05:23.920 buf 0x200000500000 len 3145728 PASSED 00:05:23.920 malloc 64 00:05:23.920 buf 0x2000004fff40 len 64 PASSED 00:05:23.920 malloc 4194304 00:05:23.920 register 0x200000800000 6291456 00:05:23.920 buf 0x200000a00000 len 4194304 PASSED 00:05:23.920 free 0x200000500000 3145728 00:05:23.920 free 0x2000004fff40 64 00:05:23.920 unregister 0x200000400000 4194304 PASSED 00:05:23.920 free 0x200000a00000 4194304 00:05:23.920 unregister 0x200000800000 6291456 PASSED 00:05:23.920 malloc 8388608 00:05:23.920 register 0x200000400000 10485760 00:05:23.920 buf 0x200000600000 len 8388608 PASSED 00:05:23.920 free 0x200000600000 8388608 00:05:23.920 unregister 0x200000400000 10485760 PASSED 00:05:23.920 passed 00:05:23.920 00:05:23.920 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.920 suites 1 1 n/a 0 0 00:05:23.920 tests 1 1 1 0 0 00:05:23.920 asserts 15 15 15 0 n/a 00:05:23.920 00:05:23.920 Elapsed time = 0.004 seconds 00:05:23.920 00:05:23.920 real 0m0.057s 00:05:23.920 user 0m0.021s 00:05:23.920 sys 0m0.036s 00:05:23.920 20:00:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.920 20:00:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:23.920 ************************************ 00:05:23.920 END TEST env_mem_callbacks 00:05:23.920 ************************************ 00:05:23.920 20:00:21 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.920 00:05:23.920 real 0m7.295s 00:05:23.920 user 0m0.992s 00:05:23.920 sys 0m0.840s 00:05:23.920 20:00:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.920 20:00:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.920 ************************************ 00:05:23.920 END TEST env 00:05:23.920 ************************************ 00:05:23.920 20:00:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.920 20:00:21 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:23.920 20:00:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.920 20:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.920 20:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.921 ************************************ 00:05:23.921 START TEST rpc 00:05:23.921 ************************************ 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:23.921 * Looking for test storage... 00:05:23.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.921 20:00:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=769381 00:05:23.921 20:00:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.921 20:00:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:23.921 20:00:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 769381 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 769381 ']' 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
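The waitforlisten helper invoked above simply polls the target until its RPC socket answers. Reproducing that startup by hand, assuming the SPDK tree and the default /var/tmp/spdk.sock socket used in this run, might look like:
# Hedged sketch: start spdk_tgt with the bdev tracepoint group and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -e bdev &
pid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                              # keep polling until the target is listening on the socket
done
echo "spdk_tgt (pid $pid) is ready"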
00:05:23.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.921 20:00:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.921 [2024-07-15 20:00:21.275384] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:23.921 [2024-07-15 20:00:21.275453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769381 ] 00:05:23.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.921 [2024-07-15 20:00:21.338868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.181 [2024-07-15 20:00:21.414150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.181 [2024-07-15 20:00:21.414186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 769381' to capture a snapshot of events at runtime. 00:05:24.181 [2024-07-15 20:00:21.414193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.181 [2024-07-15 20:00:21.414200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.181 [2024-07-15 20:00:21.414205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid769381 for offline analysis/debug. 00:05:24.181 [2024-07-15 20:00:21.414226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.751 20:00:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.751 20:00:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:24.752 20:00:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:24.752 20:00:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:24.752 20:00:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:24.752 20:00:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:24.752 20:00:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.752 20:00:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.752 20:00:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.752 ************************************ 00:05:24.752 START TEST rpc_integrity 00:05:24.752 ************************************ 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.752 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.752 { 00:05:24.752 "name": "Malloc0", 00:05:24.752 "aliases": [ 00:05:24.752 "125e42bf-fc60-4a83-b137-64d693292f8f" 00:05:24.752 ], 00:05:24.752 "product_name": "Malloc disk", 00:05:24.752 "block_size": 512, 00:05:24.752 "num_blocks": 16384, 00:05:24.752 "uuid": "125e42bf-fc60-4a83-b137-64d693292f8f", 00:05:24.752 "assigned_rate_limits": { 00:05:24.752 "rw_ios_per_sec": 0, 00:05:24.752 "rw_mbytes_per_sec": 0, 00:05:24.752 "r_mbytes_per_sec": 0, 00:05:24.752 "w_mbytes_per_sec": 0 00:05:24.752 }, 00:05:24.752 "claimed": false, 00:05:24.752 "zoned": false, 00:05:24.752 "supported_io_types": { 00:05:24.752 "read": true, 00:05:24.752 "write": true, 00:05:24.752 "unmap": true, 00:05:24.752 "flush": true, 00:05:24.752 "reset": true, 00:05:24.752 "nvme_admin": false, 00:05:24.752 "nvme_io": false, 00:05:24.752 "nvme_io_md": false, 00:05:24.752 "write_zeroes": true, 00:05:24.752 "zcopy": true, 00:05:24.752 "get_zone_info": false, 00:05:24.752 "zone_management": false, 00:05:24.752 "zone_append": false, 00:05:24.752 "compare": false, 00:05:24.752 "compare_and_write": false, 00:05:24.752 "abort": true, 00:05:24.752 "seek_hole": false, 00:05:24.752 "seek_data": false, 00:05:24.752 "copy": true, 00:05:24.752 "nvme_iov_md": false 00:05:24.752 }, 00:05:24.752 "memory_domains": [ 00:05:24.752 { 00:05:24.752 "dma_device_id": "system", 00:05:24.752 "dma_device_type": 1 00:05:24.752 }, 00:05:24.752 { 00:05:24.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.752 "dma_device_type": 2 00:05:24.752 } 00:05:24.752 ], 00:05:24.752 "driver_specific": {} 00:05:24.752 } 00:05:24.752 ]' 00:05:24.752 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.012 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.012 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.012 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.012 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.012 [2024-07-15 20:00:22.216080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.012 [2024-07-15 20:00:22.216112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.012 [2024-07-15 20:00:22.216129] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6b5d80 00:05:25.012 [2024-07-15 20:00:22.216136] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.012 
[2024-07-15 20:00:22.217463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.012 [2024-07-15 20:00:22.217483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.012 Passthru0 00:05:25.012 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.012 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.012 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.012 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.012 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.012 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.012 { 00:05:25.012 "name": "Malloc0", 00:05:25.012 "aliases": [ 00:05:25.012 "125e42bf-fc60-4a83-b137-64d693292f8f" 00:05:25.012 ], 00:05:25.012 "product_name": "Malloc disk", 00:05:25.012 "block_size": 512, 00:05:25.012 "num_blocks": 16384, 00:05:25.012 "uuid": "125e42bf-fc60-4a83-b137-64d693292f8f", 00:05:25.012 "assigned_rate_limits": { 00:05:25.012 "rw_ios_per_sec": 0, 00:05:25.012 "rw_mbytes_per_sec": 0, 00:05:25.012 "r_mbytes_per_sec": 0, 00:05:25.012 "w_mbytes_per_sec": 0 00:05:25.012 }, 00:05:25.012 "claimed": true, 00:05:25.012 "claim_type": "exclusive_write", 00:05:25.012 "zoned": false, 00:05:25.012 "supported_io_types": { 00:05:25.012 "read": true, 00:05:25.012 "write": true, 00:05:25.012 "unmap": true, 00:05:25.012 "flush": true, 00:05:25.012 "reset": true, 00:05:25.012 "nvme_admin": false, 00:05:25.012 "nvme_io": false, 00:05:25.012 "nvme_io_md": false, 00:05:25.012 "write_zeroes": true, 00:05:25.012 "zcopy": true, 00:05:25.012 "get_zone_info": false, 00:05:25.012 "zone_management": false, 00:05:25.012 "zone_append": false, 00:05:25.012 "compare": false, 00:05:25.012 "compare_and_write": false, 00:05:25.012 "abort": true, 00:05:25.012 "seek_hole": false, 00:05:25.012 "seek_data": false, 00:05:25.012 "copy": true, 00:05:25.012 "nvme_iov_md": false 00:05:25.012 }, 00:05:25.012 "memory_domains": [ 00:05:25.012 { 00:05:25.012 "dma_device_id": "system", 00:05:25.012 "dma_device_type": 1 00:05:25.012 }, 00:05:25.012 { 00:05:25.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.012 "dma_device_type": 2 00:05:25.012 } 00:05:25.012 ], 00:05:25.012 "driver_specific": {} 00:05:25.012 }, 00:05:25.012 { 00:05:25.012 "name": "Passthru0", 00:05:25.012 "aliases": [ 00:05:25.012 "9aecba9f-e853-56ae-9a9e-0a68ddd25264" 00:05:25.012 ], 00:05:25.012 "product_name": "passthru", 00:05:25.012 "block_size": 512, 00:05:25.012 "num_blocks": 16384, 00:05:25.012 "uuid": "9aecba9f-e853-56ae-9a9e-0a68ddd25264", 00:05:25.012 "assigned_rate_limits": { 00:05:25.012 "rw_ios_per_sec": 0, 00:05:25.012 "rw_mbytes_per_sec": 0, 00:05:25.012 "r_mbytes_per_sec": 0, 00:05:25.012 "w_mbytes_per_sec": 0 00:05:25.012 }, 00:05:25.012 "claimed": false, 00:05:25.012 "zoned": false, 00:05:25.012 "supported_io_types": { 00:05:25.012 "read": true, 00:05:25.012 "write": true, 00:05:25.012 "unmap": true, 00:05:25.012 "flush": true, 00:05:25.012 "reset": true, 00:05:25.012 "nvme_admin": false, 00:05:25.012 "nvme_io": false, 00:05:25.012 "nvme_io_md": false, 00:05:25.012 "write_zeroes": true, 00:05:25.012 "zcopy": true, 00:05:25.012 "get_zone_info": false, 00:05:25.012 "zone_management": false, 00:05:25.012 "zone_append": false, 00:05:25.012 "compare": false, 00:05:25.012 "compare_and_write": false, 00:05:25.012 "abort": true, 00:05:25.012 "seek_hole": false, 
00:05:25.012 "seek_data": false, 00:05:25.012 "copy": true, 00:05:25.012 "nvme_iov_md": false 00:05:25.012 }, 00:05:25.012 "memory_domains": [ 00:05:25.012 { 00:05:25.013 "dma_device_id": "system", 00:05:25.013 "dma_device_type": 1 00:05:25.013 }, 00:05:25.013 { 00:05:25.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.013 "dma_device_type": 2 00:05:25.013 } 00:05:25.013 ], 00:05:25.013 "driver_specific": { 00:05:25.013 "passthru": { 00:05:25.013 "name": "Passthru0", 00:05:25.013 "base_bdev_name": "Malloc0" 00:05:25.013 } 00:05:25.013 } 00:05:25.013 } 00:05:25.013 ]' 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.013 20:00:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.013 00:05:25.013 real 0m0.301s 00:05:25.013 user 0m0.187s 00:05:25.013 sys 0m0.047s 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.013 20:00:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.013 ************************************ 00:05:25.013 END TEST rpc_integrity 00:05:25.013 ************************************ 00:05:25.013 20:00:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.013 20:00:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.013 20:00:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.013 20:00:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.013 20:00:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.273 ************************************ 00:05:25.273 START TEST rpc_plugins 00:05:25.273 ************************************ 00:05:25.273 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:25.273 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.273 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.273 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.273 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.273 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.273 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:25.273 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.274 { 00:05:25.274 "name": "Malloc1", 00:05:25.274 "aliases": [ 00:05:25.274 "2b1f0226-b0de-4d50-affa-2134118dd90e" 00:05:25.274 ], 00:05:25.274 "product_name": "Malloc disk", 00:05:25.274 "block_size": 4096, 00:05:25.274 "num_blocks": 256, 00:05:25.274 "uuid": "2b1f0226-b0de-4d50-affa-2134118dd90e", 00:05:25.274 "assigned_rate_limits": { 00:05:25.274 "rw_ios_per_sec": 0, 00:05:25.274 "rw_mbytes_per_sec": 0, 00:05:25.274 "r_mbytes_per_sec": 0, 00:05:25.274 "w_mbytes_per_sec": 0 00:05:25.274 }, 00:05:25.274 "claimed": false, 00:05:25.274 "zoned": false, 00:05:25.274 "supported_io_types": { 00:05:25.274 "read": true, 00:05:25.274 "write": true, 00:05:25.274 "unmap": true, 00:05:25.274 "flush": true, 00:05:25.274 "reset": true, 00:05:25.274 "nvme_admin": false, 00:05:25.274 "nvme_io": false, 00:05:25.274 "nvme_io_md": false, 00:05:25.274 "write_zeroes": true, 00:05:25.274 "zcopy": true, 00:05:25.274 "get_zone_info": false, 00:05:25.274 "zone_management": false, 00:05:25.274 "zone_append": false, 00:05:25.274 "compare": false, 00:05:25.274 "compare_and_write": false, 00:05:25.274 "abort": true, 00:05:25.274 "seek_hole": false, 00:05:25.274 "seek_data": false, 00:05:25.274 "copy": true, 00:05:25.274 "nvme_iov_md": false 00:05:25.274 }, 00:05:25.274 "memory_domains": [ 00:05:25.274 { 00:05:25.274 "dma_device_id": "system", 00:05:25.274 "dma_device_type": 1 00:05:25.274 }, 00:05:25.274 { 00:05:25.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.274 "dma_device_type": 2 00:05:25.274 } 00:05:25.274 ], 00:05:25.274 "driver_specific": {} 00:05:25.274 } 00:05:25.274 ]' 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:25.274 20:00:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.274 00:05:25.274 real 0m0.152s 00:05:25.274 user 0m0.086s 00:05:25.274 sys 0m0.027s 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.274 20:00:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.274 ************************************ 00:05:25.274 END TEST rpc_plugins 00:05:25.274 ************************************ 00:05:25.274 20:00:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.274 20:00:22 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.274 20:00:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.274 20:00:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.274 20:00:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.274 ************************************ 00:05:25.274 START TEST rpc_trace_cmd_test 00:05:25.274 ************************************ 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:25.274 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid769381", 00:05:25.274 "tpoint_group_mask": "0x8", 00:05:25.274 "iscsi_conn": { 00:05:25.274 "mask": "0x2", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "scsi": { 00:05:25.274 "mask": "0x4", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "bdev": { 00:05:25.274 "mask": "0x8", 00:05:25.274 "tpoint_mask": "0xffffffffffffffff" 00:05:25.274 }, 00:05:25.274 "nvmf_rdma": { 00:05:25.274 "mask": "0x10", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "nvmf_tcp": { 00:05:25.274 "mask": "0x20", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "ftl": { 00:05:25.274 "mask": "0x40", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "blobfs": { 00:05:25.274 "mask": "0x80", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "dsa": { 00:05:25.274 "mask": "0x200", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "thread": { 00:05:25.274 "mask": "0x400", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "nvme_pcie": { 00:05:25.274 "mask": "0x800", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "iaa": { 00:05:25.274 "mask": "0x1000", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "nvme_tcp": { 00:05:25.274 "mask": "0x2000", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "bdev_nvme": { 00:05:25.274 "mask": "0x4000", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 }, 00:05:25.274 "sock": { 00:05:25.274 "mask": "0x8000", 00:05:25.274 "tpoint_mask": "0x0" 00:05:25.274 } 00:05:25.274 }' 00:05:25.274 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
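The trace_get_info output above reflects the target having been launched with '-e bdev': the group mask comes back as 0x8 and every bdev tracepoint is enabled (tpoint_mask 0xffffffffffffffff), with the shared-memory trace file at /dev/shm/spdk_tgt_trace.pid769381. A minimal sketch of inspecting the same state by hand against a running target follows; the SPDK_DIR path and the jq filters are illustrative assumptions, not part of the test script.

#!/usr/bin/env bash
# Sketch only: query trace state over JSON-RPC, as rpc_cmd does inside rpc_trace_cmd_test.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}              # assumption: location of the SPDK checkout

# Same RPC the test drives; output matches the JSON shown in the log above.
"$SPDK_DIR/scripts/rpc.py" trace_get_info > /tmp/trace_info.json

# Shared-memory file consumed by spdk_trace, e.g. /dev/shm/spdk_tgt_trace.pid<PID>.
jq -r '.tpoint_shm_path' /tmp/trace_info.json

# Group mask set by '-e bdev' at startup (0x8) and the per-group bdev tracepoint mask.
jq -r '.tpoint_group_mask' /tmp/trace_info.json
jq -r '.bdev.tpoint_mask' /tmp/trace_info.json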
00:05:25.570 00:05:25.570 real 0m0.228s 00:05:25.570 user 0m0.194s 00:05:25.570 sys 0m0.026s 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.570 20:00:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.570 ************************************ 00:05:25.570 END TEST rpc_trace_cmd_test 00:05:25.570 ************************************ 00:05:25.570 20:00:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.570 20:00:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:25.570 20:00:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.570 20:00:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.570 20:00:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.570 20:00:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.570 20:00:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.570 ************************************ 00:05:25.570 START TEST rpc_daemon_integrity 00:05:25.570 ************************************ 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.570 20:00:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.832 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.832 { 00:05:25.832 "name": "Malloc2", 00:05:25.832 "aliases": [ 00:05:25.832 "82cd90dd-65d6-4501-ab1b-97eae62e27f6" 00:05:25.832 ], 00:05:25.832 "product_name": "Malloc disk", 00:05:25.832 "block_size": 512, 00:05:25.832 "num_blocks": 16384, 00:05:25.832 "uuid": "82cd90dd-65d6-4501-ab1b-97eae62e27f6", 00:05:25.832 "assigned_rate_limits": { 00:05:25.832 "rw_ios_per_sec": 0, 00:05:25.832 "rw_mbytes_per_sec": 0, 00:05:25.832 "r_mbytes_per_sec": 0, 00:05:25.832 "w_mbytes_per_sec": 0 00:05:25.832 }, 00:05:25.832 "claimed": false, 00:05:25.832 "zoned": false, 00:05:25.832 "supported_io_types": { 00:05:25.832 "read": true, 00:05:25.832 "write": true, 00:05:25.832 "unmap": true, 00:05:25.832 "flush": true, 00:05:25.832 "reset": true, 00:05:25.832 "nvme_admin": false, 00:05:25.832 "nvme_io": false, 
00:05:25.832 "nvme_io_md": false, 00:05:25.832 "write_zeroes": true, 00:05:25.832 "zcopy": true, 00:05:25.832 "get_zone_info": false, 00:05:25.832 "zone_management": false, 00:05:25.832 "zone_append": false, 00:05:25.832 "compare": false, 00:05:25.832 "compare_and_write": false, 00:05:25.832 "abort": true, 00:05:25.832 "seek_hole": false, 00:05:25.832 "seek_data": false, 00:05:25.832 "copy": true, 00:05:25.832 "nvme_iov_md": false 00:05:25.832 }, 00:05:25.832 "memory_domains": [ 00:05:25.832 { 00:05:25.832 "dma_device_id": "system", 00:05:25.832 "dma_device_type": 1 00:05:25.832 }, 00:05:25.832 { 00:05:25.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.832 "dma_device_type": 2 00:05:25.832 } 00:05:25.833 ], 00:05:25.833 "driver_specific": {} 00:05:25.833 } 00:05:25.833 ]' 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 [2024-07-15 20:00:23.114512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:25.833 [2024-07-15 20:00:23.114540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.833 [2024-07-15 20:00:23.114553] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6b6a90 00:05:25.833 [2024-07-15 20:00:23.114559] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.833 [2024-07-15 20:00:23.115763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.833 [2024-07-15 20:00:23.115781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.833 Passthru0 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.833 { 00:05:25.833 "name": "Malloc2", 00:05:25.833 "aliases": [ 00:05:25.833 "82cd90dd-65d6-4501-ab1b-97eae62e27f6" 00:05:25.833 ], 00:05:25.833 "product_name": "Malloc disk", 00:05:25.833 "block_size": 512, 00:05:25.833 "num_blocks": 16384, 00:05:25.833 "uuid": "82cd90dd-65d6-4501-ab1b-97eae62e27f6", 00:05:25.833 "assigned_rate_limits": { 00:05:25.833 "rw_ios_per_sec": 0, 00:05:25.833 "rw_mbytes_per_sec": 0, 00:05:25.833 "r_mbytes_per_sec": 0, 00:05:25.833 "w_mbytes_per_sec": 0 00:05:25.833 }, 00:05:25.833 "claimed": true, 00:05:25.833 "claim_type": "exclusive_write", 00:05:25.833 "zoned": false, 00:05:25.833 "supported_io_types": { 00:05:25.833 "read": true, 00:05:25.833 "write": true, 00:05:25.833 "unmap": true, 00:05:25.833 "flush": true, 00:05:25.833 "reset": true, 00:05:25.833 "nvme_admin": false, 00:05:25.833 "nvme_io": false, 00:05:25.833 "nvme_io_md": false, 00:05:25.833 "write_zeroes": true, 00:05:25.833 "zcopy": true, 00:05:25.833 "get_zone_info": 
false, 00:05:25.833 "zone_management": false, 00:05:25.833 "zone_append": false, 00:05:25.833 "compare": false, 00:05:25.833 "compare_and_write": false, 00:05:25.833 "abort": true, 00:05:25.833 "seek_hole": false, 00:05:25.833 "seek_data": false, 00:05:25.833 "copy": true, 00:05:25.833 "nvme_iov_md": false 00:05:25.833 }, 00:05:25.833 "memory_domains": [ 00:05:25.833 { 00:05:25.833 "dma_device_id": "system", 00:05:25.833 "dma_device_type": 1 00:05:25.833 }, 00:05:25.833 { 00:05:25.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.833 "dma_device_type": 2 00:05:25.833 } 00:05:25.833 ], 00:05:25.833 "driver_specific": {} 00:05:25.833 }, 00:05:25.833 { 00:05:25.833 "name": "Passthru0", 00:05:25.833 "aliases": [ 00:05:25.833 "c830717b-a93a-5073-82f5-7f44bd9d9a0b" 00:05:25.833 ], 00:05:25.833 "product_name": "passthru", 00:05:25.833 "block_size": 512, 00:05:25.833 "num_blocks": 16384, 00:05:25.833 "uuid": "c830717b-a93a-5073-82f5-7f44bd9d9a0b", 00:05:25.833 "assigned_rate_limits": { 00:05:25.833 "rw_ios_per_sec": 0, 00:05:25.833 "rw_mbytes_per_sec": 0, 00:05:25.833 "r_mbytes_per_sec": 0, 00:05:25.833 "w_mbytes_per_sec": 0 00:05:25.833 }, 00:05:25.833 "claimed": false, 00:05:25.833 "zoned": false, 00:05:25.833 "supported_io_types": { 00:05:25.833 "read": true, 00:05:25.833 "write": true, 00:05:25.833 "unmap": true, 00:05:25.833 "flush": true, 00:05:25.833 "reset": true, 00:05:25.833 "nvme_admin": false, 00:05:25.833 "nvme_io": false, 00:05:25.833 "nvme_io_md": false, 00:05:25.833 "write_zeroes": true, 00:05:25.833 "zcopy": true, 00:05:25.833 "get_zone_info": false, 00:05:25.833 "zone_management": false, 00:05:25.833 "zone_append": false, 00:05:25.833 "compare": false, 00:05:25.833 "compare_and_write": false, 00:05:25.833 "abort": true, 00:05:25.833 "seek_hole": false, 00:05:25.833 "seek_data": false, 00:05:25.833 "copy": true, 00:05:25.833 "nvme_iov_md": false 00:05:25.833 }, 00:05:25.833 "memory_domains": [ 00:05:25.833 { 00:05:25.833 "dma_device_id": "system", 00:05:25.833 "dma_device_type": 1 00:05:25.833 }, 00:05:25.833 { 00:05:25.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.833 "dma_device_type": 2 00:05:25.833 } 00:05:25.833 ], 00:05:25.833 "driver_specific": { 00:05:25.833 "passthru": { 00:05:25.833 "name": "Passthru0", 00:05:25.833 "base_bdev_name": "Malloc2" 00:05:25.833 } 00:05:25.833 } 00:05:25.833 } 00:05:25.833 ]' 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.833 20:00:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.833 00:05:25.833 real 0m0.289s 00:05:25.833 user 0m0.178s 00:05:25.833 sys 0m0.044s 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.833 20:00:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.833 ************************************ 00:05:25.833 END TEST rpc_daemon_integrity 00:05:25.833 ************************************ 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.094 20:00:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.094 20:00:23 rpc -- rpc/rpc.sh@84 -- # killprocess 769381 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 769381 ']' 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@952 -- # kill -0 769381 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@953 -- # uname 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769381 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769381' 00:05:26.094 killing process with pid 769381 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@967 -- # kill 769381 00:05:26.094 20:00:23 rpc -- common/autotest_common.sh@972 -- # wait 769381 00:05:26.355 00:05:26.355 real 0m2.447s 00:05:26.355 user 0m3.175s 00:05:26.355 sys 0m0.724s 00:05:26.355 20:00:23 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.355 20:00:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.355 ************************************ 00:05:26.355 END TEST rpc 00:05:26.355 ************************************ 00:05:26.355 20:00:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.355 20:00:23 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:26.355 20:00:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.355 20:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.355 20:00:23 -- common/autotest_common.sh@10 -- # set +x 00:05:26.355 ************************************ 00:05:26.355 START TEST skip_rpc 00:05:26.355 ************************************ 00:05:26.355 20:00:23 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:26.355 * Looking for test storage... 
00:05:26.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.355 20:00:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.355 20:00:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.355 20:00:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:26.355 20:00:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.355 20:00:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.355 20:00:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.355 ************************************ 00:05:26.355 START TEST skip_rpc 00:05:26.355 ************************************ 00:05:26.355 20:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:26.355 20:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=769942 00:05:26.355 20:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.355 20:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:26.355 20:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:26.616 [2024-07-15 20:00:23.835660] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:26.616 [2024-07-15 20:00:23.835721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769942 ] 00:05:26.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.616 [2024-07-15 20:00:23.898073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.616 [2024-07-15 20:00:23.973292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.909 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 769942 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 769942 ']' 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 769942 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769942 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769942' 00:05:31.910 killing process with pid 769942 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 769942 00:05:31.910 20:00:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 769942 00:05:31.910 00:05:31.910 real 0m5.277s 00:05:31.910 user 0m5.083s 00:05:31.910 sys 0m0.224s 00:05:31.910 20:00:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.910 20:00:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.910 ************************************ 00:05:31.910 END TEST skip_rpc 00:05:31.910 ************************************ 00:05:31.910 20:00:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.910 20:00:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:31.910 20:00:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.910 20:00:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.910 20:00:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.910 ************************************ 00:05:31.910 START TEST skip_rpc_with_json 00:05:31.910 ************************************ 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=771131 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 771131 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 771131 ']' 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
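The skip_rpc test that just finished starts spdk_tgt with --no-rpc-server and then asserts that an RPC such as spdk_get_version fails, since no listener is created on /var/tmp/spdk.sock. A rough stand-alone reproduction of that check is sketched below; the SPDK_DIR path and the fixed 5-second delay mirror the test but are stated here as assumptions rather than taken from it verbatim.

#!/usr/bin/env bash
# Sketch only: start a target without its RPC server and confirm RPCs are rejected.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}              # assumption: location of the SPDK checkout

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                          # skip_rpc.sh waits a fixed delay the same way

# With no RPC server running, this must exit non-zero, mirroring the NOT rpc_cmd check.
if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
    echo "unexpected: spdk_get_version succeeded without an RPC server" >&2
    kill -9 "$tgt_pid"
    exit 1
fi

kill -9 "$tgt_pid"
wait "$tgt_pid" 2>/dev/null
echo "RPC correctly rejected while --no-rpc-server is in effect"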
00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.910 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.910 [2024-07-15 20:00:29.190550] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:31.910 [2024-07-15 20:00:29.190604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771131 ] 00:05:31.910 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.910 [2024-07-15 20:00:29.251816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.910 [2024-07-15 20:00:29.324695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.851 [2024-07-15 20:00:29.970016] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:32.851 request: 00:05:32.851 { 00:05:32.851 "trtype": "tcp", 00:05:32.851 "method": "nvmf_get_transports", 00:05:32.851 "req_id": 1 00:05:32.851 } 00:05:32.851 Got JSON-RPC error response 00:05:32.851 response: 00:05:32.851 { 00:05:32.851 "code": -19, 00:05:32.851 "message": "No such device" 00:05:32.851 } 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.851 [2024-07-15 20:00:29.982146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.851 20:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.851 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.851 20:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.851 { 00:05:32.851 "subsystems": [ 00:05:32.851 { 00:05:32.851 "subsystem": "vfio_user_target", 00:05:32.851 "config": null 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "subsystem": "keyring", 00:05:32.851 "config": [] 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "subsystem": "iobuf", 00:05:32.851 "config": [ 00:05:32.851 { 00:05:32.851 "method": "iobuf_set_options", 00:05:32.851 "params": { 00:05:32.851 "small_pool_count": 8192, 00:05:32.851 "large_pool_count": 1024, 00:05:32.851 "small_bufsize": 8192, 00:05:32.851 "large_bufsize": 
135168 00:05:32.851 } 00:05:32.851 } 00:05:32.851 ] 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "subsystem": "sock", 00:05:32.851 "config": [ 00:05:32.851 { 00:05:32.851 "method": "sock_set_default_impl", 00:05:32.851 "params": { 00:05:32.851 "impl_name": "posix" 00:05:32.851 } 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "method": "sock_impl_set_options", 00:05:32.851 "params": { 00:05:32.851 "impl_name": "ssl", 00:05:32.851 "recv_buf_size": 4096, 00:05:32.851 "send_buf_size": 4096, 00:05:32.851 "enable_recv_pipe": true, 00:05:32.851 "enable_quickack": false, 00:05:32.851 "enable_placement_id": 0, 00:05:32.851 "enable_zerocopy_send_server": true, 00:05:32.851 "enable_zerocopy_send_client": false, 00:05:32.851 "zerocopy_threshold": 0, 00:05:32.851 "tls_version": 0, 00:05:32.851 "enable_ktls": false 00:05:32.851 } 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "method": "sock_impl_set_options", 00:05:32.851 "params": { 00:05:32.851 "impl_name": "posix", 00:05:32.851 "recv_buf_size": 2097152, 00:05:32.851 "send_buf_size": 2097152, 00:05:32.851 "enable_recv_pipe": true, 00:05:32.851 "enable_quickack": false, 00:05:32.851 "enable_placement_id": 0, 00:05:32.851 "enable_zerocopy_send_server": true, 00:05:32.851 "enable_zerocopy_send_client": false, 00:05:32.851 "zerocopy_threshold": 0, 00:05:32.851 "tls_version": 0, 00:05:32.851 "enable_ktls": false 00:05:32.851 } 00:05:32.851 } 00:05:32.851 ] 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "subsystem": "vmd", 00:05:32.851 "config": [] 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "subsystem": "accel", 00:05:32.851 "config": [ 00:05:32.851 { 00:05:32.851 "method": "accel_set_options", 00:05:32.851 "params": { 00:05:32.851 "small_cache_size": 128, 00:05:32.851 "large_cache_size": 16, 00:05:32.851 "task_count": 2048, 00:05:32.851 "sequence_count": 2048, 00:05:32.851 "buf_count": 2048 00:05:32.851 } 00:05:32.851 } 00:05:32.851 ] 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "subsystem": "bdev", 00:05:32.851 "config": [ 00:05:32.851 { 00:05:32.851 "method": "bdev_set_options", 00:05:32.851 "params": { 00:05:32.851 "bdev_io_pool_size": 65535, 00:05:32.851 "bdev_io_cache_size": 256, 00:05:32.851 "bdev_auto_examine": true, 00:05:32.851 "iobuf_small_cache_size": 128, 00:05:32.851 "iobuf_large_cache_size": 16 00:05:32.851 } 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "method": "bdev_raid_set_options", 00:05:32.851 "params": { 00:05:32.851 "process_window_size_kb": 1024 00:05:32.851 } 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "method": "bdev_iscsi_set_options", 00:05:32.851 "params": { 00:05:32.851 "timeout_sec": 30 00:05:32.851 } 00:05:32.851 }, 00:05:32.851 { 00:05:32.851 "method": "bdev_nvme_set_options", 00:05:32.851 "params": { 00:05:32.851 "action_on_timeout": "none", 00:05:32.851 "timeout_us": 0, 00:05:32.851 "timeout_admin_us": 0, 00:05:32.851 "keep_alive_timeout_ms": 10000, 00:05:32.851 "arbitration_burst": 0, 00:05:32.851 "low_priority_weight": 0, 00:05:32.851 "medium_priority_weight": 0, 00:05:32.851 "high_priority_weight": 0, 00:05:32.851 "nvme_adminq_poll_period_us": 10000, 00:05:32.851 "nvme_ioq_poll_period_us": 0, 00:05:32.851 "io_queue_requests": 0, 00:05:32.851 "delay_cmd_submit": true, 00:05:32.851 "transport_retry_count": 4, 00:05:32.851 "bdev_retry_count": 3, 00:05:32.851 "transport_ack_timeout": 0, 00:05:32.851 "ctrlr_loss_timeout_sec": 0, 00:05:32.851 "reconnect_delay_sec": 0, 00:05:32.851 "fast_io_fail_timeout_sec": 0, 00:05:32.851 "disable_auto_failback": false, 00:05:32.851 "generate_uuids": false, 00:05:32.851 "transport_tos": 0, 
00:05:32.851 "nvme_error_stat": false, 00:05:32.851 "rdma_srq_size": 0, 00:05:32.851 "io_path_stat": false, 00:05:32.851 "allow_accel_sequence": false, 00:05:32.852 "rdma_max_cq_size": 0, 00:05:32.852 "rdma_cm_event_timeout_ms": 0, 00:05:32.852 "dhchap_digests": [ 00:05:32.852 "sha256", 00:05:32.852 "sha384", 00:05:32.852 "sha512" 00:05:32.852 ], 00:05:32.852 "dhchap_dhgroups": [ 00:05:32.852 "null", 00:05:32.852 "ffdhe2048", 00:05:32.852 "ffdhe3072", 00:05:32.852 "ffdhe4096", 00:05:32.852 "ffdhe6144", 00:05:32.852 "ffdhe8192" 00:05:32.852 ] 00:05:32.852 } 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "method": "bdev_nvme_set_hotplug", 00:05:32.852 "params": { 00:05:32.852 "period_us": 100000, 00:05:32.852 "enable": false 00:05:32.852 } 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "method": "bdev_wait_for_examine" 00:05:32.852 } 00:05:32.852 ] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "scsi", 00:05:32.852 "config": null 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "scheduler", 00:05:32.852 "config": [ 00:05:32.852 { 00:05:32.852 "method": "framework_set_scheduler", 00:05:32.852 "params": { 00:05:32.852 "name": "static" 00:05:32.852 } 00:05:32.852 } 00:05:32.852 ] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "vhost_scsi", 00:05:32.852 "config": [] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "vhost_blk", 00:05:32.852 "config": [] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "ublk", 00:05:32.852 "config": [] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "nbd", 00:05:32.852 "config": [] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "nvmf", 00:05:32.852 "config": [ 00:05:32.852 { 00:05:32.852 "method": "nvmf_set_config", 00:05:32.852 "params": { 00:05:32.852 "discovery_filter": "match_any", 00:05:32.852 "admin_cmd_passthru": { 00:05:32.852 "identify_ctrlr": false 00:05:32.852 } 00:05:32.852 } 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "method": "nvmf_set_max_subsystems", 00:05:32.852 "params": { 00:05:32.852 "max_subsystems": 1024 00:05:32.852 } 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "method": "nvmf_set_crdt", 00:05:32.852 "params": { 00:05:32.852 "crdt1": 0, 00:05:32.852 "crdt2": 0, 00:05:32.852 "crdt3": 0 00:05:32.852 } 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "method": "nvmf_create_transport", 00:05:32.852 "params": { 00:05:32.852 "trtype": "TCP", 00:05:32.852 "max_queue_depth": 128, 00:05:32.852 "max_io_qpairs_per_ctrlr": 127, 00:05:32.852 "in_capsule_data_size": 4096, 00:05:32.852 "max_io_size": 131072, 00:05:32.852 "io_unit_size": 131072, 00:05:32.852 "max_aq_depth": 128, 00:05:32.852 "num_shared_buffers": 511, 00:05:32.852 "buf_cache_size": 4294967295, 00:05:32.852 "dif_insert_or_strip": false, 00:05:32.852 "zcopy": false, 00:05:32.852 "c2h_success": true, 00:05:32.852 "sock_priority": 0, 00:05:32.852 "abort_timeout_sec": 1, 00:05:32.852 "ack_timeout": 0, 00:05:32.852 "data_wr_pool_size": 0 00:05:32.852 } 00:05:32.852 } 00:05:32.852 ] 00:05:32.852 }, 00:05:32.852 { 00:05:32.852 "subsystem": "iscsi", 00:05:32.852 "config": [ 00:05:32.852 { 00:05:32.852 "method": "iscsi_set_options", 00:05:32.852 "params": { 00:05:32.852 "node_base": "iqn.2016-06.io.spdk", 00:05:32.852 "max_sessions": 128, 00:05:32.852 "max_connections_per_session": 2, 00:05:32.852 "max_queue_depth": 64, 00:05:32.852 "default_time2wait": 2, 00:05:32.852 "default_time2retain": 20, 00:05:32.852 "first_burst_length": 8192, 00:05:32.852 "immediate_data": true, 00:05:32.852 "allow_duplicated_isid": false, 00:05:32.852 
"error_recovery_level": 0, 00:05:32.852 "nop_timeout": 60, 00:05:32.852 "nop_in_interval": 30, 00:05:32.852 "disable_chap": false, 00:05:32.852 "require_chap": false, 00:05:32.852 "mutual_chap": false, 00:05:32.852 "chap_group": 0, 00:05:32.852 "max_large_datain_per_connection": 64, 00:05:32.852 "max_r2t_per_connection": 4, 00:05:32.852 "pdu_pool_size": 36864, 00:05:32.852 "immediate_data_pool_size": 16384, 00:05:32.852 "data_out_pool_size": 2048 00:05:32.852 } 00:05:32.852 } 00:05:32.852 ] 00:05:32.852 } 00:05:32.852 ] 00:05:32.852 } 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 771131 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 771131 ']' 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 771131 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771131 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771131' 00:05:32.852 killing process with pid 771131 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 771131 00:05:32.852 20:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 771131 00:05:33.113 20:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=771288 00:05:33.113 20:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:33.113 20:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 771288 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 771288 ']' 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 771288 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771288 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771288' 00:05:38.405 killing process with pid 771288 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 771288 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 771288 00:05:38.405 20:00:35 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:38.405 00:05:38.405 real 0m6.556s 00:05:38.405 user 0m6.464s 00:05:38.405 sys 0m0.508s 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.405 20:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.405 ************************************ 00:05:38.406 END TEST skip_rpc_with_json 00:05:38.406 ************************************ 00:05:38.406 20:00:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.406 20:00:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:38.406 20:00:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.406 20:00:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.406 20:00:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.406 ************************************ 00:05:38.406 START TEST skip_rpc_with_delay 00:05:38.406 ************************************ 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:38.406 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.406 [2024-07-15 20:00:35.829888] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:38.406 [2024-07-15 20:00:35.829984] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:38.667 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:38.667 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.667 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.667 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.667 00:05:38.667 real 0m0.075s 00:05:38.667 user 0m0.052s 00:05:38.667 sys 0m0.023s 00:05:38.667 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.667 20:00:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:38.667 ************************************ 00:05:38.667 END TEST skip_rpc_with_delay 00:05:38.667 ************************************ 00:05:38.667 20:00:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.667 20:00:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:38.667 20:00:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:38.667 20:00:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:38.667 20:00:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.667 20:00:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.667 20:00:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.667 ************************************ 00:05:38.667 START TEST exit_on_failed_rpc_init 00:05:38.667 ************************************ 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=772566 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 772566 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 772566 ']' 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.667 20:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.667 [2024-07-15 20:00:35.990164] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
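waitforlisten above blocks until the freshly launched target is actually serving RPCs on /var/tmp/spdk.sock. A simplified stand-in for that wait, assuming the target was started in the background and using only the default socket path shown in the log, could be written as:

  pid=$!                                  # PID of the spdk_tgt started above
  while [ ! -S /var/tmp/spdk.sock ]; do   # only checks the socket node; weaker than the harness's wait
      kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt exited before listening'; exit 1; }
      sleep 0.1
  done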
00:05:38.667 [2024-07-15 20:00:35.990213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772566 ] 00:05:38.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.667 [2024-07-15 20:00:36.048699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.927 [2024-07-15 20:00:36.114593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.498 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.498 [2024-07-15 20:00:36.794252] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
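NOT is the expect-failure wrapper: this second spdk_tgt instance is supposed to die, and the test passes only if it does. Reduced to the bare pattern (an illustration, not the actual helper, which as the type -t / type -P probing above shows also verifies that the wrapped argument is executable):

  expect_failure() {
      "$@" && return 1    # wrapped command unexpectedly succeeded
      return 0            # non-zero exit is the outcome the test wants
  }
  expect_failure build/bin/spdk_tgt -m 0x2   # second target, same default RPC socket as the first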
00:05:39.498 [2024-07-15 20:00:36.794301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772677 ] 00:05:39.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.498 [2024-07-15 20:00:36.870383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.793 [2024-07-15 20:00:36.935236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.793 [2024-07-15 20:00:36.935296] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:39.793 [2024-07-15 20:00:36.935306] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:39.793 [2024-07-15 20:00:36.935312] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 772566 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 772566 ']' 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 772566 00:05:39.793 20:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772566 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772566' 00:05:39.793 killing process with pid 772566 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 772566 00:05:39.793 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 772566 00:05:40.053 00:05:40.053 real 0m1.335s 00:05:40.053 user 0m1.569s 00:05:40.053 sys 0m0.359s 00:05:40.053 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.053 20:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.053 ************************************ 00:05:40.053 END TEST exit_on_failed_rpc_init 00:05:40.053 ************************************ 00:05:40.053 20:00:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.053 20:00:37 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.053 00:05:40.053 real 0m13.662s 00:05:40.053 user 0m13.334s 00:05:40.053 sys 0m1.391s 00:05:40.053 20:00:37 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.053 20:00:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.053 ************************************ 00:05:40.053 END TEST skip_rpc 00:05:40.053 ************************************ 00:05:40.053 20:00:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.053 20:00:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.053 20:00:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.053 20:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.053 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.053 ************************************ 00:05:40.053 START TEST rpc_client 00:05:40.053 ************************************ 00:05:40.053 20:00:37 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.053 * Looking for test storage... 00:05:40.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:40.053 20:00:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:40.314 OK 00:05:40.314 20:00:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.314 00:05:40.314 real 0m0.132s 00:05:40.314 user 0m0.061s 00:05:40.314 sys 0m0.080s 00:05:40.314 20:00:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.314 20:00:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.314 ************************************ 00:05:40.314 END TEST rpc_client 00:05:40.314 ************************************ 00:05:40.314 20:00:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.314 20:00:37 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:40.314 20:00:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.314 20:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.314 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.314 ************************************ 00:05:40.314 START TEST json_config 00:05:40.314 ************************************ 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.314 20:00:37 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.314 20:00:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.314 20:00:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.314 20:00:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.314 20:00:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.314 20:00:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.314 20:00:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.314 20:00:37 json_config -- paths/export.sh@5 -- # export PATH 00:05:40.314 20:00:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@47 -- # : 0 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.314 20:00:37 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.314 20:00:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:40.314 INFO: JSON configuration test init 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.314 20:00:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:40.314 20:00:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.314 20:00:37 json_config -- json_config/common.sh@10 -- # shift 00:05:40.314 20:00:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.314 20:00:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.314 20:00:37 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:40.314 20:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.314 20:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.314 20:00:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=773046 00:05:40.314 20:00:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.314 Waiting for target to run... 00:05:40.314 20:00:37 json_config -- json_config/common.sh@25 -- # waitforlisten 773046 /var/tmp/spdk_tgt.sock 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 773046 ']' 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.314 20:00:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.315 20:00:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:40.315 20:00:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.315 20:00:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.315 20:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.576 [2024-07-15 20:00:37.769522] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:40.576 [2024-07-15 20:00:37.769596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773046 ] 00:05:40.576 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.837 [2024-07-15 20:00:38.068396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.837 [2024-07-15 20:00:38.120233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.407 20:00:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.407 20:00:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:41.407 20:00:38 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.407 00:05:41.407 20:00:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:41.407 20:00:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:41.407 20:00:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.407 20:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.407 20:00:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:41.407 20:00:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:41.407 20:00:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.407 20:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.407 20:00:38 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:41.407 20:00:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:41.407 20:00:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:41.979 20:00:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.979 20:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:41.979 20:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:41.979 20:00:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.979 20:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:41.979 20:00:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.979 20:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:41.979 20:00:39 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.979 20:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.241 MallocForNvmf0 00:05:42.241 20:00:39 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.241 20:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.241 MallocForNvmf1 
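bdev_malloc_create takes the bdev size in MB and the block size in bytes as positional arguments; on success the RPC prints the new bdev's name, which is what the bare MallocForNvmf0 and MallocForNvmf1 lines above are. Run by hand against the same socket, the two calls are simply (workspace prefix dropped):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB, 512-byte blocks
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB, 1024-byte blocks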
00:05:42.501 20:00:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.501 20:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.501 [2024-07-15 20:00:39.816380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.501 20:00:39 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.502 20:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.761 20:00:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.761 20:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.761 20:00:40 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.761 20:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:43.021 20:00:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.021 20:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.284 [2024-07-15 20:00:40.466460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.284 20:00:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:43.284 20:00:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.284 20:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.284 20:00:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:43.284 20:00:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.284 20:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.284 20:00:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:43.284 20:00:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.284 20:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.284 MallocBdevForConfigChangeCheck 00:05:43.545 20:00:40 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:43.545 20:00:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.545 20:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.545 20:00:40 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:43.545 20:00:40 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.805 20:00:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:43.805 INFO: shutting down applications... 00:05:43.805 20:00:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:43.805 20:00:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:43.805 20:00:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:43.805 20:00:41 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:44.066 Calling clear_iscsi_subsystem 00:05:44.066 Calling clear_nvmf_subsystem 00:05:44.066 Calling clear_nbd_subsystem 00:05:44.066 Calling clear_ublk_subsystem 00:05:44.066 Calling clear_vhost_blk_subsystem 00:05:44.066 Calling clear_vhost_scsi_subsystem 00:05:44.066 Calling clear_bdev_subsystem 00:05:44.066 20:00:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:44.066 20:00:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:44.066 20:00:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:44.066 20:00:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.066 20:00:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:44.066 20:00:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:44.636 20:00:41 json_config -- json_config/json_config.sh@345 -- # break 00:05:44.636 20:00:41 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:44.636 20:00:41 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:44.636 20:00:41 json_config -- json_config/common.sh@31 -- # local app=target 00:05:44.636 20:00:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.636 20:00:41 json_config -- json_config/common.sh@35 -- # [[ -n 773046 ]] 00:05:44.636 20:00:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 773046 00:05:44.636 20:00:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.636 20:00:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.636 20:00:41 json_config -- json_config/common.sh@41 -- # kill -0 773046 00:05:44.636 20:00:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.895 20:00:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.895 20:00:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.895 20:00:42 json_config -- json_config/common.sh@41 -- # kill -0 773046 00:05:44.895 20:00:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.895 20:00:42 json_config -- json_config/common.sh@43 -- # break 00:05:44.895 20:00:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.895 20:00:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.895 SPDK target shutdown done 00:05:44.895 20:00:42 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:44.895 INFO: relaunching applications... 00:05:44.895 20:00:42 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.895 20:00:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:44.895 20:00:42 json_config -- json_config/common.sh@10 -- # shift 00:05:44.895 20:00:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.895 20:00:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.895 20:00:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.895 20:00:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.895 20:00:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.895 20:00:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=773955 00:05:44.895 20:00:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.895 Waiting for target to run... 00:05:44.895 20:00:42 json_config -- json_config/common.sh@25 -- # waitforlisten 773955 /var/tmp/spdk_tgt.sock 00:05:44.895 20:00:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.895 20:00:42 json_config -- common/autotest_common.sh@829 -- # '[' -z 773955 ']' 00:05:44.895 20:00:42 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.895 20:00:42 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.895 20:00:42 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.895 20:00:42 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.895 20:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.157 [2024-07-15 20:00:42.366887] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
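The relaunch reuses the configuration captured from the first target: save_config dumps the live JSON over RPC into spdk_tgt_config.json, and the new spdk_tgt consumes that file with --json instead of starting empty. Stripped of the harness plumbing, and with the long workspace paths shortened, the round trip is:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # stop the first target, then start a new one from the captured file
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json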
00:05:45.157 [2024-07-15 20:00:42.366948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773955 ] 00:05:45.157 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.417 [2024-07-15 20:00:42.771309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.417 [2024-07-15 20:00:42.833699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.994 [2024-07-15 20:00:43.329629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.994 [2024-07-15 20:00:43.361971] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.994 20:00:43 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.994 20:00:43 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:45.994 20:00:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.994 00:05:45.994 20:00:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:45.994 20:00:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.994 INFO: Checking if target configuration is the same... 00:05:45.994 20:00:43 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.994 20:00:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:45.994 20:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.994 + '[' 2 -ne 2 ']' 00:05:45.994 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.995 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.995 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.995 +++ basename /dev/fd/62 00:05:45.995 ++ mktemp /tmp/62.XXX 00:05:46.256 + tmp_file_1=/tmp/62.XzL 00:05:46.256 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.256 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.256 + tmp_file_2=/tmp/spdk_tgt_config.json.xxN 00:05:46.256 + ret=0 00:05:46.256 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.517 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.517 + diff -u /tmp/62.XzL /tmp/spdk_tgt_config.json.xxN 00:05:46.517 + echo 'INFO: JSON config files are the same' 00:05:46.517 INFO: JSON config files are the same 00:05:46.517 + rm /tmp/62.XzL /tmp/spdk_tgt_config.json.xxN 00:05:46.517 + exit 0 00:05:46.517 20:00:43 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:46.517 20:00:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:46.517 INFO: changing configuration and checking if this can be detected... 
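json_diff.sh only declares the two configurations identical after normalizing both sides with config_filter.py -method sort, so key ordering cannot cause a false mismatch. A sketch of the comparison above, with the redirection plumbing guessed at since xtrace does not show it, and reusing the temp file names from the log:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > /tmp/62.XzL
  config_filter.py -method sort < spdk_tgt_config.json > /tmp/spdk_tgt_config.json.xxN
  diff -u /tmp/62.XzL /tmp/spdk_tgt_config.json.xxN && echo 'INFO: JSON config files are the same'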
00:05:46.517 20:00:43 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.517 20:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.517 20:00:43 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:46.517 20:00:43 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.517 20:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.517 + '[' 2 -ne 2 ']' 00:05:46.517 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.517 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.517 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.517 +++ basename /dev/fd/62 00:05:46.517 ++ mktemp /tmp/62.XXX 00:05:46.517 + tmp_file_1=/tmp/62.qVS 00:05:46.786 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.786 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.786 + tmp_file_2=/tmp/spdk_tgt_config.json.h7c 00:05:46.786 + ret=0 00:05:46.786 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.049 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.049 + diff -u /tmp/62.qVS /tmp/spdk_tgt_config.json.h7c 00:05:47.049 + ret=1 00:05:47.049 + echo '=== Start of file: /tmp/62.qVS ===' 00:05:47.049 + cat /tmp/62.qVS 00:05:47.049 + echo '=== End of file: /tmp/62.qVS ===' 00:05:47.049 + echo '' 00:05:47.049 + echo '=== Start of file: /tmp/spdk_tgt_config.json.h7c ===' 00:05:47.049 + cat /tmp/spdk_tgt_config.json.h7c 00:05:47.049 + echo '=== End of file: /tmp/spdk_tgt_config.json.h7c ===' 00:05:47.049 + echo '' 00:05:47.049 + rm /tmp/62.qVS /tmp/spdk_tgt_config.json.h7c 00:05:47.049 + exit 1 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:47.049 INFO: configuration change detected. 
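Change detection relies on a sentinel object: MallocBdevForConfigChangeCheck was created earlier purely so it could be deleted now, guaranteeing that the live configuration drifts from the saved file. In outline:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # re-running the sorted diff now produces output and exits 1, hence ret=1 and
  # 'INFO: configuration change detected.' above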
00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 773955 ]] 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.049 20:00:44 json_config -- json_config/json_config.sh@323 -- # killprocess 773955 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@948 -- # '[' -z 773955 ']' 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@952 -- # kill -0 773955 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@953 -- # uname 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 773955 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 773955' 00:05:47.049 killing process with pid 773955 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@967 -- # kill 773955 00:05:47.049 20:00:44 json_config -- common/autotest_common.sh@972 -- # wait 773955 00:05:47.309 20:00:44 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.309 20:00:44 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:47.309 20:00:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.309 20:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.309 20:00:44 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:47.309 20:00:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:47.309 INFO: Success 00:05:47.309 00:05:47.309 real 0m7.150s 00:05:47.309 user 
0m8.562s 00:05:47.309 sys 0m1.831s 00:05:47.309 20:00:44 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.309 20:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.309 ************************************ 00:05:47.309 END TEST json_config 00:05:47.309 ************************************ 00:05:47.571 20:00:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.571 20:00:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.571 20:00:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.571 20:00:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.571 20:00:44 -- common/autotest_common.sh@10 -- # set +x 00:05:47.571 ************************************ 00:05:47.571 START TEST json_config_extra_key 00:05:47.571 ************************************ 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.571 20:00:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.571 20:00:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.571 20:00:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.571 20:00:44 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.571 20:00:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.571 20:00:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.571 20:00:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:47.571 20:00:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.571 20:00:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:47.571 20:00:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:47.571 INFO: launching applications... 00:05:47.571 20:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=774709 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.571 Waiting for target to run... 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 774709 /var/tmp/spdk_tgt.sock 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 774709 ']' 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.571 20:00:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.571 20:00:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.571 [2024-07-15 20:00:44.987168] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
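Here the target is started from a ready-made file, extra_key.json, rather than from a configuration built over RPC. That file's contents are not part of this log; purely as an illustration of the general shape such --json inputs take (the method name is a real RPC, every value below is invented):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
          }
        ]
      }
    ]
  }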
00:05:47.571 [2024-07-15 20:00:44.987238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774709 ] 00:05:47.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.832 [2024-07-15 20:00:45.233773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.091 [2024-07-15 20:00:45.283848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.351 20:00:45 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.351 20:00:45 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:48.351 00:05:48.351 20:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:48.351 INFO: shutting down applications... 00:05:48.351 20:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 774709 ]] 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 774709 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 774709 00:05:48.351 20:00:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 774709 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.920 20:00:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.920 SPDK target shutdown done 00:05:48.920 20:00:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.920 Success 00:05:48.920 00:05:48.920 real 0m1.429s 00:05:48.920 user 0m1.084s 00:05:48.920 sys 0m0.361s 00:05:48.920 20:00:46 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.920 20:00:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 ************************************ 00:05:48.920 END TEST json_config_extra_key 00:05:48.920 ************************************ 00:05:48.920 20:00:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.920 20:00:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.920 20:00:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.920 20:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.920 20:00:46 -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.920 ************************************ 00:05:48.920 START TEST alias_rpc 00:05:48.920 ************************************ 00:05:48.920 20:00:46 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:49.180 * Looking for test storage... 00:05:49.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:49.180 20:00:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.180 20:00:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=775092 00:05:49.180 20:00:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 775092 00:05:49.180 20:00:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.180 20:00:46 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 775092 ']' 00:05:49.180 20:00:46 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.180 20:00:46 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.180 20:00:46 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.180 20:00:46 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.180 20:00:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.180 [2024-07-15 20:00:46.480820] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:49.180 [2024-07-15 20:00:46.480876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775092 ] 00:05:49.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.180 [2024-07-15 20:00:46.543056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.180 [2024-07-15 20:00:46.610697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:50.119 20:00:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:50.119 20:00:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 775092 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 775092 ']' 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 775092 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775092 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775092' 00:05:50.119 killing process with pid 775092 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@967 
-- # kill 775092 00:05:50.119 20:00:47 alias_rpc -- common/autotest_common.sh@972 -- # wait 775092 00:05:50.379 00:05:50.379 real 0m1.380s 00:05:50.379 user 0m1.544s 00:05:50.379 sys 0m0.352s 00:05:50.379 20:00:47 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.379 20:00:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.379 ************************************ 00:05:50.379 END TEST alias_rpc 00:05:50.379 ************************************ 00:05:50.379 20:00:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.379 20:00:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:50.379 20:00:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.379 20:00:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.379 20:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.379 20:00:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.379 ************************************ 00:05:50.379 START TEST spdkcli_tcp 00:05:50.379 ************************************ 00:05:50.379 20:00:47 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.639 * Looking for test storage... 00:05:50.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=775443 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 775443 00:05:50.639 20:00:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 775443 ']' 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.639 20:00:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 [2024-07-15 20:00:47.947422] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
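Editor's note: the spdkcli_tcp run that begins just above drives the target over TCP rather than the default UNIX socket. The core of the test, visible a little further down, is a socat bridge from 127.0.0.1:9998 to /var/tmp/spdk.sock plus rpc.py pointed at that TCP endpoint. A hedged sketch of the same bridge is below; the reuseaddr,fork socat options are an addition for the sketch, the trace itself shows a plain TCP-LISTEN.

    # Sketch only: expose a running spdk_tgt's UNIX RPC socket on TCP, query it,
    # then tear the bridge down.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    "$SPDK/scripts/rpc.py" -s 127.0.0.1 -p 9998 rpc_get_methods   # same call as in the log
    kill "$socat_pid"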
00:05:50.639 [2024-07-15 20:00:47.947494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775443 ] 00:05:50.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.639 [2024-07-15 20:00:48.012070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.898 [2024-07-15 20:00:48.087797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.898 [2024-07-15 20:00:48.087801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.473 20:00:48 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.473 20:00:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:51.473 20:00:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.473 20:00:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=775496 00:05:51.473 20:00:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.473 [ 00:05:51.473 "bdev_malloc_delete", 00:05:51.473 "bdev_malloc_create", 00:05:51.473 "bdev_null_resize", 00:05:51.473 "bdev_null_delete", 00:05:51.473 "bdev_null_create", 00:05:51.473 "bdev_nvme_cuse_unregister", 00:05:51.473 "bdev_nvme_cuse_register", 00:05:51.473 "bdev_opal_new_user", 00:05:51.473 "bdev_opal_set_lock_state", 00:05:51.473 "bdev_opal_delete", 00:05:51.473 "bdev_opal_get_info", 00:05:51.473 "bdev_opal_create", 00:05:51.473 "bdev_nvme_opal_revert", 00:05:51.473 "bdev_nvme_opal_init", 00:05:51.473 "bdev_nvme_send_cmd", 00:05:51.473 "bdev_nvme_get_path_iostat", 00:05:51.473 "bdev_nvme_get_mdns_discovery_info", 00:05:51.473 "bdev_nvme_stop_mdns_discovery", 00:05:51.473 "bdev_nvme_start_mdns_discovery", 00:05:51.473 "bdev_nvme_set_multipath_policy", 00:05:51.473 "bdev_nvme_set_preferred_path", 00:05:51.473 "bdev_nvme_get_io_paths", 00:05:51.473 "bdev_nvme_remove_error_injection", 00:05:51.473 "bdev_nvme_add_error_injection", 00:05:51.473 "bdev_nvme_get_discovery_info", 00:05:51.473 "bdev_nvme_stop_discovery", 00:05:51.473 "bdev_nvme_start_discovery", 00:05:51.473 "bdev_nvme_get_controller_health_info", 00:05:51.473 "bdev_nvme_disable_controller", 00:05:51.473 "bdev_nvme_enable_controller", 00:05:51.473 "bdev_nvme_reset_controller", 00:05:51.473 "bdev_nvme_get_transport_statistics", 00:05:51.473 "bdev_nvme_apply_firmware", 00:05:51.473 "bdev_nvme_detach_controller", 00:05:51.473 "bdev_nvme_get_controllers", 00:05:51.473 "bdev_nvme_attach_controller", 00:05:51.473 "bdev_nvme_set_hotplug", 00:05:51.473 "bdev_nvme_set_options", 00:05:51.473 "bdev_passthru_delete", 00:05:51.473 "bdev_passthru_create", 00:05:51.473 "bdev_lvol_set_parent_bdev", 00:05:51.473 "bdev_lvol_set_parent", 00:05:51.473 "bdev_lvol_check_shallow_copy", 00:05:51.473 "bdev_lvol_start_shallow_copy", 00:05:51.473 "bdev_lvol_grow_lvstore", 00:05:51.473 "bdev_lvol_get_lvols", 00:05:51.473 "bdev_lvol_get_lvstores", 00:05:51.473 "bdev_lvol_delete", 00:05:51.473 "bdev_lvol_set_read_only", 00:05:51.473 "bdev_lvol_resize", 00:05:51.473 "bdev_lvol_decouple_parent", 00:05:51.473 "bdev_lvol_inflate", 00:05:51.473 "bdev_lvol_rename", 00:05:51.473 "bdev_lvol_clone_bdev", 00:05:51.473 "bdev_lvol_clone", 00:05:51.473 "bdev_lvol_snapshot", 00:05:51.473 "bdev_lvol_create", 00:05:51.473 "bdev_lvol_delete_lvstore", 00:05:51.473 
"bdev_lvol_rename_lvstore", 00:05:51.473 "bdev_lvol_create_lvstore", 00:05:51.473 "bdev_raid_set_options", 00:05:51.473 "bdev_raid_remove_base_bdev", 00:05:51.473 "bdev_raid_add_base_bdev", 00:05:51.473 "bdev_raid_delete", 00:05:51.473 "bdev_raid_create", 00:05:51.473 "bdev_raid_get_bdevs", 00:05:51.473 "bdev_error_inject_error", 00:05:51.473 "bdev_error_delete", 00:05:51.473 "bdev_error_create", 00:05:51.473 "bdev_split_delete", 00:05:51.473 "bdev_split_create", 00:05:51.473 "bdev_delay_delete", 00:05:51.473 "bdev_delay_create", 00:05:51.473 "bdev_delay_update_latency", 00:05:51.473 "bdev_zone_block_delete", 00:05:51.473 "bdev_zone_block_create", 00:05:51.473 "blobfs_create", 00:05:51.473 "blobfs_detect", 00:05:51.473 "blobfs_set_cache_size", 00:05:51.473 "bdev_aio_delete", 00:05:51.473 "bdev_aio_rescan", 00:05:51.473 "bdev_aio_create", 00:05:51.473 "bdev_ftl_set_property", 00:05:51.473 "bdev_ftl_get_properties", 00:05:51.473 "bdev_ftl_get_stats", 00:05:51.473 "bdev_ftl_unmap", 00:05:51.473 "bdev_ftl_unload", 00:05:51.473 "bdev_ftl_delete", 00:05:51.473 "bdev_ftl_load", 00:05:51.473 "bdev_ftl_create", 00:05:51.473 "bdev_virtio_attach_controller", 00:05:51.473 "bdev_virtio_scsi_get_devices", 00:05:51.473 "bdev_virtio_detach_controller", 00:05:51.473 "bdev_virtio_blk_set_hotplug", 00:05:51.473 "bdev_iscsi_delete", 00:05:51.473 "bdev_iscsi_create", 00:05:51.473 "bdev_iscsi_set_options", 00:05:51.473 "accel_error_inject_error", 00:05:51.473 "ioat_scan_accel_module", 00:05:51.473 "dsa_scan_accel_module", 00:05:51.473 "iaa_scan_accel_module", 00:05:51.473 "vfu_virtio_create_scsi_endpoint", 00:05:51.473 "vfu_virtio_scsi_remove_target", 00:05:51.473 "vfu_virtio_scsi_add_target", 00:05:51.473 "vfu_virtio_create_blk_endpoint", 00:05:51.473 "vfu_virtio_delete_endpoint", 00:05:51.473 "keyring_file_remove_key", 00:05:51.473 "keyring_file_add_key", 00:05:51.473 "keyring_linux_set_options", 00:05:51.473 "iscsi_get_histogram", 00:05:51.473 "iscsi_enable_histogram", 00:05:51.473 "iscsi_set_options", 00:05:51.473 "iscsi_get_auth_groups", 00:05:51.473 "iscsi_auth_group_remove_secret", 00:05:51.473 "iscsi_auth_group_add_secret", 00:05:51.473 "iscsi_delete_auth_group", 00:05:51.473 "iscsi_create_auth_group", 00:05:51.473 "iscsi_set_discovery_auth", 00:05:51.473 "iscsi_get_options", 00:05:51.473 "iscsi_target_node_request_logout", 00:05:51.473 "iscsi_target_node_set_redirect", 00:05:51.473 "iscsi_target_node_set_auth", 00:05:51.473 "iscsi_target_node_add_lun", 00:05:51.473 "iscsi_get_stats", 00:05:51.473 "iscsi_get_connections", 00:05:51.473 "iscsi_portal_group_set_auth", 00:05:51.473 "iscsi_start_portal_group", 00:05:51.473 "iscsi_delete_portal_group", 00:05:51.473 "iscsi_create_portal_group", 00:05:51.473 "iscsi_get_portal_groups", 00:05:51.473 "iscsi_delete_target_node", 00:05:51.473 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.473 "iscsi_target_node_add_pg_ig_maps", 00:05:51.473 "iscsi_create_target_node", 00:05:51.473 "iscsi_get_target_nodes", 00:05:51.473 "iscsi_delete_initiator_group", 00:05:51.473 "iscsi_initiator_group_remove_initiators", 00:05:51.473 "iscsi_initiator_group_add_initiators", 00:05:51.473 "iscsi_create_initiator_group", 00:05:51.473 "iscsi_get_initiator_groups", 00:05:51.473 "nvmf_set_crdt", 00:05:51.473 "nvmf_set_config", 00:05:51.473 "nvmf_set_max_subsystems", 00:05:51.473 "nvmf_stop_mdns_prr", 00:05:51.473 "nvmf_publish_mdns_prr", 00:05:51.473 "nvmf_subsystem_get_listeners", 00:05:51.473 "nvmf_subsystem_get_qpairs", 00:05:51.473 "nvmf_subsystem_get_controllers", 00:05:51.473 
"nvmf_get_stats", 00:05:51.473 "nvmf_get_transports", 00:05:51.473 "nvmf_create_transport", 00:05:51.473 "nvmf_get_targets", 00:05:51.473 "nvmf_delete_target", 00:05:51.473 "nvmf_create_target", 00:05:51.473 "nvmf_subsystem_allow_any_host", 00:05:51.473 "nvmf_subsystem_remove_host", 00:05:51.473 "nvmf_subsystem_add_host", 00:05:51.473 "nvmf_ns_remove_host", 00:05:51.473 "nvmf_ns_add_host", 00:05:51.473 "nvmf_subsystem_remove_ns", 00:05:51.473 "nvmf_subsystem_add_ns", 00:05:51.473 "nvmf_subsystem_listener_set_ana_state", 00:05:51.473 "nvmf_discovery_get_referrals", 00:05:51.473 "nvmf_discovery_remove_referral", 00:05:51.473 "nvmf_discovery_add_referral", 00:05:51.473 "nvmf_subsystem_remove_listener", 00:05:51.473 "nvmf_subsystem_add_listener", 00:05:51.473 "nvmf_delete_subsystem", 00:05:51.473 "nvmf_create_subsystem", 00:05:51.473 "nvmf_get_subsystems", 00:05:51.473 "env_dpdk_get_mem_stats", 00:05:51.473 "nbd_get_disks", 00:05:51.473 "nbd_stop_disk", 00:05:51.474 "nbd_start_disk", 00:05:51.474 "ublk_recover_disk", 00:05:51.474 "ublk_get_disks", 00:05:51.474 "ublk_stop_disk", 00:05:51.474 "ublk_start_disk", 00:05:51.474 "ublk_destroy_target", 00:05:51.474 "ublk_create_target", 00:05:51.474 "virtio_blk_create_transport", 00:05:51.474 "virtio_blk_get_transports", 00:05:51.474 "vhost_controller_set_coalescing", 00:05:51.474 "vhost_get_controllers", 00:05:51.474 "vhost_delete_controller", 00:05:51.474 "vhost_create_blk_controller", 00:05:51.474 "vhost_scsi_controller_remove_target", 00:05:51.474 "vhost_scsi_controller_add_target", 00:05:51.474 "vhost_start_scsi_controller", 00:05:51.474 "vhost_create_scsi_controller", 00:05:51.474 "thread_set_cpumask", 00:05:51.474 "framework_get_governor", 00:05:51.474 "framework_get_scheduler", 00:05:51.474 "framework_set_scheduler", 00:05:51.474 "framework_get_reactors", 00:05:51.474 "thread_get_io_channels", 00:05:51.474 "thread_get_pollers", 00:05:51.474 "thread_get_stats", 00:05:51.474 "framework_monitor_context_switch", 00:05:51.474 "spdk_kill_instance", 00:05:51.474 "log_enable_timestamps", 00:05:51.474 "log_get_flags", 00:05:51.474 "log_clear_flag", 00:05:51.474 "log_set_flag", 00:05:51.474 "log_get_level", 00:05:51.474 "log_set_level", 00:05:51.474 "log_get_print_level", 00:05:51.474 "log_set_print_level", 00:05:51.474 "framework_enable_cpumask_locks", 00:05:51.474 "framework_disable_cpumask_locks", 00:05:51.474 "framework_wait_init", 00:05:51.474 "framework_start_init", 00:05:51.474 "scsi_get_devices", 00:05:51.474 "bdev_get_histogram", 00:05:51.474 "bdev_enable_histogram", 00:05:51.474 "bdev_set_qos_limit", 00:05:51.474 "bdev_set_qd_sampling_period", 00:05:51.474 "bdev_get_bdevs", 00:05:51.474 "bdev_reset_iostat", 00:05:51.474 "bdev_get_iostat", 00:05:51.474 "bdev_examine", 00:05:51.474 "bdev_wait_for_examine", 00:05:51.474 "bdev_set_options", 00:05:51.474 "notify_get_notifications", 00:05:51.474 "notify_get_types", 00:05:51.474 "accel_get_stats", 00:05:51.474 "accel_set_options", 00:05:51.474 "accel_set_driver", 00:05:51.474 "accel_crypto_key_destroy", 00:05:51.474 "accel_crypto_keys_get", 00:05:51.474 "accel_crypto_key_create", 00:05:51.474 "accel_assign_opc", 00:05:51.474 "accel_get_module_info", 00:05:51.474 "accel_get_opc_assignments", 00:05:51.474 "vmd_rescan", 00:05:51.474 "vmd_remove_device", 00:05:51.474 "vmd_enable", 00:05:51.474 "sock_get_default_impl", 00:05:51.474 "sock_set_default_impl", 00:05:51.474 "sock_impl_set_options", 00:05:51.474 "sock_impl_get_options", 00:05:51.474 "iobuf_get_stats", 00:05:51.474 "iobuf_set_options", 
00:05:51.474 "keyring_get_keys", 00:05:51.474 "framework_get_pci_devices", 00:05:51.474 "framework_get_config", 00:05:51.474 "framework_get_subsystems", 00:05:51.474 "vfu_tgt_set_base_path", 00:05:51.474 "trace_get_info", 00:05:51.474 "trace_get_tpoint_group_mask", 00:05:51.474 "trace_disable_tpoint_group", 00:05:51.474 "trace_enable_tpoint_group", 00:05:51.474 "trace_clear_tpoint_mask", 00:05:51.474 "trace_set_tpoint_mask", 00:05:51.474 "spdk_get_version", 00:05:51.474 "rpc_get_methods" 00:05:51.474 ] 00:05:51.474 20:00:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.474 20:00:48 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.474 20:00:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.734 20:00:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.734 20:00:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 775443 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 775443 ']' 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 775443 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775443 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775443' 00:05:51.734 killing process with pid 775443 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 775443 00:05:51.734 20:00:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 775443 00:05:51.995 00:05:51.995 real 0m1.407s 00:05:51.995 user 0m2.572s 00:05:51.995 sys 0m0.422s 00:05:51.995 20:00:49 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.995 20:00:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.995 ************************************ 00:05:51.995 END TEST spdkcli_tcp 00:05:51.995 ************************************ 00:05:51.995 20:00:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.995 20:00:49 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.995 20:00:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.995 20:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.995 20:00:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.995 ************************************ 00:05:51.995 START TEST dpdk_mem_utility 00:05:51.995 ************************************ 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.995 * Looking for test storage... 
00:05:51.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:51.995 20:00:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.995 20:00:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=775732 00:05:51.995 20:00:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 775732 00:05:51.995 20:00:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 775732 ']' 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.995 20:00:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.995 [2024-07-15 20:00:49.426708] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:51.995 [2024-07-15 20:00:49.426776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775732 ] 00:05:52.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.255 [2024-07-15 20:00:49.490384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.255 [2024-07-15 20:00:49.564645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.826 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.826 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:52.826 20:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.826 20:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.826 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.826 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.826 { 00:05:52.826 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.826 } 00:05:52.826 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.826 20:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.826 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:52.826 1 heaps totaling size 814.000000 MiB 00:05:52.826 size: 814.000000 MiB heap id: 0 00:05:52.826 end heaps---------- 00:05:52.826 8 mempools totaling size 598.116089 MiB 00:05:52.826 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.826 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.826 size: 84.521057 MiB name: bdev_io_775732 00:05:52.826 size: 51.011292 MiB name: evtpool_775732 00:05:52.826 size: 
50.003479 MiB name: msgpool_775732 00:05:52.826 size: 21.763794 MiB name: PDU_Pool 00:05:52.826 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.826 size: 0.026123 MiB name: Session_Pool 00:05:52.826 end mempools------- 00:05:52.826 6 memzones totaling size 4.142822 MiB 00:05:52.826 size: 1.000366 MiB name: RG_ring_0_775732 00:05:52.826 size: 1.000366 MiB name: RG_ring_1_775732 00:05:52.826 size: 1.000366 MiB name: RG_ring_4_775732 00:05:52.826 size: 1.000366 MiB name: RG_ring_5_775732 00:05:52.826 size: 0.125366 MiB name: RG_ring_2_775732 00:05:52.826 size: 0.015991 MiB name: RG_ring_3_775732 00:05:52.826 end memzones------- 00:05:52.826 20:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:53.088 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:53.088 list of free elements. size: 12.519348 MiB 00:05:53.088 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:53.088 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:53.088 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:53.088 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:53.088 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:53.088 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:53.088 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:53.088 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:53.088 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:53.088 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:53.088 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:53.088 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:53.088 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:53.088 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:53.088 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:53.088 list of standard malloc elements. 
size: 199.218079 MiB 00:05:53.088 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:53.088 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:53.088 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:53.088 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:53.088 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:53.088 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:53.088 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:53.088 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:53.088 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:53.088 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:53.088 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:53.088 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:53.088 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:53.088 list of memzone associated elements. 
size: 602.262573 MiB 00:05:53.088 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:53.088 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:53.088 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:53.088 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:53.088 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:53.088 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_775732_0 00:05:53.088 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:53.088 associated memzone info: size: 48.002930 MiB name: MP_evtpool_775732_0 00:05:53.088 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:53.088 associated memzone info: size: 48.002930 MiB name: MP_msgpool_775732_0 00:05:53.088 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:53.088 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:53.088 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:53.088 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:53.088 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:53.088 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_775732 00:05:53.088 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:53.088 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_775732 00:05:53.088 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:53.088 associated memzone info: size: 1.007996 MiB name: MP_evtpool_775732 00:05:53.088 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:53.088 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:53.088 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:53.088 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:53.088 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:53.088 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:53.088 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:53.088 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:53.088 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:53.088 associated memzone info: size: 1.000366 MiB name: RG_ring_0_775732 00:05:53.088 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:53.088 associated memzone info: size: 1.000366 MiB name: RG_ring_1_775732 00:05:53.088 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:53.088 associated memzone info: size: 1.000366 MiB name: RG_ring_4_775732 00:05:53.088 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:53.088 associated memzone info: size: 1.000366 MiB name: RG_ring_5_775732 00:05:53.088 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:53.088 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_775732 00:05:53.088 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:53.088 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:53.088 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:53.088 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:53.088 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:53.088 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:53.088 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:53.088 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_775732 00:05:53.088 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:53.088 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:53.088 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:53.088 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:53.088 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:53.088 associated memzone info: size: 0.015991 MiB name: RG_ring_3_775732 00:05:53.088 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:53.088 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:53.088 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:53.088 associated memzone info: size: 0.000183 MiB name: MP_msgpool_775732 00:05:53.088 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:53.088 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_775732 00:05:53.088 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:53.088 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:53.088 20:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:53.088 20:00:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 775732 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 775732 ']' 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 775732 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775732 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775732' 00:05:53.088 killing process with pid 775732 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 775732 00:05:53.088 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 775732 00:05:53.349 00:05:53.349 real 0m1.304s 00:05:53.349 user 0m1.384s 00:05:53.349 sys 0m0.380s 00:05:53.349 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.349 20:00:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.349 ************************************ 00:05:53.349 END TEST dpdk_mem_utility 00:05:53.349 ************************************ 00:05:53.349 20:00:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.349 20:00:50 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:53.349 20:00:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.349 20:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.349 20:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.349 ************************************ 00:05:53.349 START TEST event 00:05:53.349 ************************************ 00:05:53.349 20:00:50 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:53.349 * Looking for test storage... 
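Editor's note on the dpdk_mem_utility output a little further up: the heap, mempool and memzone listing comes from a two-step flow, an RPC that makes the target dump its DPDK memory state to /tmp/spdk_mem_dump.txt, followed by scripts/dpdk_mem_info.py to format it. A hedged sketch against the default /var/tmp/spdk.sock socket:

    # Sketch only: ask a running SPDK app for a DPDK memory dump, then format it.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$SPDK/scripts/dpdk_mem_info.py"                # heap / mempool / memzone totals
    "$SPDK/scripts/dpdk_mem_info.py" -m 0           # per-element detail for heap 0, as shown above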
00:05:53.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:53.349 20:00:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:53.349 20:00:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.349 20:00:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.349 20:00:50 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:53.349 20:00:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.349 20:00:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.349 ************************************ 00:05:53.349 START TEST event_perf 00:05:53.349 ************************************ 00:05:53.349 20:00:50 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.610 Running I/O for 1 seconds...[2024-07-15 20:00:50.787085] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:53.610 [2024-07-15 20:00:50.787206] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775982 ] 00:05:53.610 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.610 [2024-07-15 20:00:50.856409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.610 [2024-07-15 20:00:50.933839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.610 [2024-07-15 20:00:50.933956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.610 [2024-07-15 20:00:50.934114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.610 Running I/O for 1 seconds...[2024-07-15 20:00:50.934114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.549 00:05:54.549 lcore 0: 178630 00:05:54.549 lcore 1: 178630 00:05:54.549 lcore 2: 178629 00:05:54.549 lcore 3: 178632 00:05:54.810 done. 00:05:54.810 00:05:54.810 real 0m1.223s 00:05:54.810 user 0m4.138s 00:05:54.810 sys 0m0.082s 00:05:54.810 20:00:51 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.810 20:00:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.810 ************************************ 00:05:54.810 END TEST event_perf 00:05:54.810 ************************************ 00:05:54.810 20:00:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.810 20:00:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.810 20:00:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:54.810 20:00:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.810 20:00:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.810 ************************************ 00:05:54.810 START TEST event_reactor 00:05:54.810 ************************************ 00:05:54.810 20:00:52 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.810 [2024-07-15 20:00:52.086756] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:05:54.810 [2024-07-15 20:00:52.086848] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776311 ] 00:05:54.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.810 [2024-07-15 20:00:52.151275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.810 [2024-07-15 20:00:52.212789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.194 test_start 00:05:56.194 oneshot 00:05:56.194 tick 100 00:05:56.194 tick 100 00:05:56.194 tick 250 00:05:56.194 tick 100 00:05:56.194 tick 100 00:05:56.194 tick 100 00:05:56.194 tick 250 00:05:56.194 tick 500 00:05:56.194 tick 100 00:05:56.194 tick 100 00:05:56.194 tick 250 00:05:56.194 tick 100 00:05:56.194 tick 100 00:05:56.194 test_end 00:05:56.194 00:05:56.194 real 0m1.201s 00:05:56.194 user 0m1.132s 00:05:56.194 sys 0m0.065s 00:05:56.194 20:00:53 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.194 20:00:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.194 ************************************ 00:05:56.194 END TEST event_reactor 00:05:56.194 ************************************ 00:05:56.194 20:00:53 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.194 20:00:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.194 20:00:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:56.194 20:00:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.194 20:00:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.194 ************************************ 00:05:56.194 START TEST event_reactor_perf 00:05:56.194 ************************************ 00:05:56.194 20:00:53 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.194 [2024-07-15 20:00:53.353333] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:05:56.194 [2024-07-15 20:00:53.353429] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776663 ] 00:05:56.194 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.194 [2024-07-15 20:00:53.416666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.194 [2024-07-15 20:00:53.479289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.164 test_start 00:05:57.164 test_end 00:05:57.164 Performance: 368411 events per second 00:05:57.164 00:05:57.164 real 0m1.201s 00:05:57.164 user 0m1.124s 00:05:57.164 sys 0m0.072s 00:05:57.164 20:00:54 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.164 20:00:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.164 ************************************ 00:05:57.164 END TEST event_reactor_perf 00:05:57.164 ************************************ 00:05:57.164 20:00:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.164 20:00:54 event -- event/event.sh@49 -- # uname -s 00:05:57.164 20:00:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.164 20:00:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:57.164 20:00:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.164 20:00:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.164 20:00:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.424 ************************************ 00:05:57.424 START TEST event_scheduler 00:05:57.424 ************************************ 00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:57.424 * Looking for test storage... 00:05:57.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:57.424 20:00:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.424 20:00:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=777044 00:05:57.424 20:00:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.424 20:00:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 777044 00:05:57.424 20:00:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 777044 ']' 00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.424 20:00:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.424 [2024-07-15 20:00:54.766569] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:05:57.424 [2024-07-15 20:00:54.766638] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777044 ] 00:05:57.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.424 [2024-07-15 20:00:54.821936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.684 [2024-07-15 20:00:54.888926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.684 [2024-07-15 20:00:54.889086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.684 [2024-07-15 20:00:54.889223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.684 [2024-07-15 20:00:54.889224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:58.257 20:00:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.257 [2024-07-15 20:00:55.555380] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:58.257 [2024-07-15 20:00:55.555394] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.257 [2024-07-15 20:00:55.555401] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.257 [2024-07-15 20:00:55.555405] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.257 [2024-07-15 20:00:55.555409] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.257 20:00:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.257 [2024-07-15 20:00:55.609784] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
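Editor's note: because the scheduler app above was started with --wait-for-rpc, nothing initializes until the test selects a scheduler and explicitly starts framework init, which is exactly the rpc_cmd sequence traced above. A hedged stand-alone equivalent using rpc.py against the default socket:

    # Sketch only: RPC sequence for an app launched with --wait-for-rpc.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" framework_set_scheduler dynamic   # pick the scheduler before init (matches the notices above)
    "$RPC" framework_start_init              # kick off subsystem init; the test app then logs
                                             # "Scheduler test application started."
    "$RPC" framework_get_scheduler           # optional: confirm the active scheduler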
00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.257 20:00:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.257 ************************************ 00:05:58.257 START TEST scheduler_create_thread 00:05:58.257 ************************************ 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.257 2 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.257 3 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.257 4 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.257 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 5 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 6 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 7 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 8 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.517 9 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.517 20:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.776 10 00:05:58.776 20:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.776 20:00:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:58.777 20:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.777 20:00:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.167 20:00:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.167 20:00:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.167 20:00:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.167 20:00:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.167 20:00:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.109 20:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.109 20:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.109 20:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.109 20:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.680 20:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.680 20:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.680 20:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.680 20:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.680 20:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.619 20:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.619 00:06:02.619 real 0m4.225s 00:06:02.619 user 0m0.022s 00:06:02.619 sys 0m0.008s 00:06:02.619 20:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.619 20:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.619 ************************************ 00:06:02.619 END TEST scheduler_create_thread 00:06:02.619 ************************************ 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:02.619 20:00:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.619 20:00:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 777044 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 777044 ']' 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 777044 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777044 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777044' 00:06:02.619 killing process with pid 777044 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 777044 00:06:02.619 20:00:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 777044 00:06:02.878 [2024-07-15 20:01:00.151056] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
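Editor's note: the scheduler_create_thread subtest above works entirely through rpc.py's plugin mechanism; scheduler_plugin is the test-only plugin that ships with this scheduler test app, not part of the standard RPC set. A hedged sketch of the create / set-active / delete cycle, assuming the plugin module is importable the way scheduler.sh arranges it:

    # Sketch only: test-plugin RPCs exercised in the trace above.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
    tid=$($RPC scheduler_thread_create -n half_active -a 0)   # prints the new thread id
    $RPC scheduler_thread_set_active "$tid" 50                # set its active period to 50, as in the trace
    $RPC scheduler_thread_delete "$tid"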
00:06:03.139 00:06:03.139 real 0m5.706s 00:06:03.139 user 0m12.741s 00:06:03.139 sys 0m0.366s 00:06:03.139 20:01:00 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.139 20:01:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.139 ************************************ 00:06:03.139 END TEST event_scheduler 00:06:03.139 ************************************ 00:06:03.139 20:01:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:03.139 20:01:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.139 20:01:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.139 20:01:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.139 20:01:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.139 20:01:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.139 ************************************ 00:06:03.139 START TEST app_repeat 00:06:03.139 ************************************ 00:06:03.139 20:01:00 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=778112 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 778112' 00:06:03.139 Process app_repeat pid: 778112 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.139 spdk_app_start Round 0 00:06:03.139 20:01:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 778112 /var/tmp/spdk-nbd.sock 00:06:03.139 20:01:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 778112 ']' 00:06:03.139 20:01:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.139 20:01:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.140 20:01:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.140 20:01:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.140 20:01:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.140 [2024-07-15 20:01:00.444158] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
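app_repeat is launched here with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, and the harness then blocks in waitforlisten until pid 778112 answers RPC on that socket. Roughly, that wait amounts to the loop sketched below; the probe method, timeout flag and sleep cadence are assumptions, while the 100-retry budget mirrors the max_retries seen in the xtrace (the real helper lives in common/autotest_common.sh).

    wait_for_rpc() {
        local pid=$1 sock=$2 i
        for ((i = 1; i <= 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died before it ever listened
            if [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0                               # socket is up and answering RPC
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc 778112 /var/tmp/spdk-nbd.sock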
00:06:03.140 [2024-07-15 20:01:00.444223] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778112 ] 00:06:03.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.140 [2024-07-15 20:01:00.504658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.140 [2024-07-15 20:01:00.570647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.140 [2024-07-15 20:01:00.570650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.077 20:01:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.077 20:01:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.077 20:01:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.077 Malloc0 00:06:04.077 20:01:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.077 Malloc1 00:06:04.335 20:01:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.335 /dev/nbd0 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.335 20:01:01 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.335 1+0 records in 00:06:04.335 1+0 records out 00:06:04.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275877 s, 14.8 MB/s 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.335 20:01:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.335 20:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.595 /dev/nbd1 00:06:04.595 20:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.595 20:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.595 1+0 records in 00:06:04.595 1+0 records out 00:06:04.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277912 s, 14.7 MB/s 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.595 20:01:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.595 20:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.595 20:01:01 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.595 20:01:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.595 20:01:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.595 20:01:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.855 { 00:06:04.855 "nbd_device": "/dev/nbd0", 00:06:04.855 "bdev_name": "Malloc0" 00:06:04.855 }, 00:06:04.855 { 00:06:04.855 "nbd_device": "/dev/nbd1", 00:06:04.855 "bdev_name": "Malloc1" 00:06:04.855 } 00:06:04.855 ]' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.855 { 00:06:04.855 "nbd_device": "/dev/nbd0", 00:06:04.855 "bdev_name": "Malloc0" 00:06:04.855 }, 00:06:04.855 { 00:06:04.855 "nbd_device": "/dev/nbd1", 00:06:04.855 "bdev_name": "Malloc1" 00:06:04.855 } 00:06:04.855 ]' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.855 /dev/nbd1' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.855 /dev/nbd1' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.855 256+0 records in 00:06:04.855 256+0 records out 00:06:04.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116803 s, 89.8 MB/s 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.855 256+0 records in 00:06:04.855 256+0 records out 00:06:04.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157042 s, 66.8 MB/s 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.855 256+0 records in 00:06:04.855 256+0 records out 00:06:04.855 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0168596 s, 62.2 MB/s 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.855 20:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.116 20:01:02 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.116 20:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.377 20:01:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.377 20:01:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.637 20:01:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.637 [2024-07-15 20:01:03.037976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.897 [2024-07-15 20:01:03.101927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.897 [2024-07-15 20:01:03.101931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.897 [2024-07-15 20:01:03.133301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.897 [2024-07-15 20:01:03.133334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.193 20:01:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.193 20:01:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.193 spdk_app_start Round 1 00:06:09.193 20:01:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 778112 /var/tmp/spdk-nbd.sock 00:06:09.193 20:01:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 778112 ']' 00:06:09.193 20:01:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.193 20:01:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.193 20:01:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
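The round that just completed is the same data-integrity pass every app_repeat iteration performs: 1 MiB of random data is generated with dd, pushed through both NBD devices with oflag=direct, read back and compared with cmp, and the scratch file is removed before the app is stopped via spdk_kill_instance. A condensed sketch of that pass is below; the scratch path is illustrative and the two NBD devices are assumed to be already exported, as nbd_start_disks does above.

    tmp=/tmp/nbdrandtest                       # illustrative scratch path
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it out through O_DIRECT
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte verification
    done
    rm "$tmp"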
00:06:09.193 20:01:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.193 20:01:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.193 20:01:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.193 Malloc0 00:06:09.193 20:01:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.193 Malloc1 00:06:09.193 20:01:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.193 /dev/nbd0 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:09.193 1+0 records in 00:06:09.193 1+0 records out 00:06:09.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217121 s, 18.9 MB/s 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:09.193 20:01:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.193 20:01:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.453 /dev/nbd1 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.453 1+0 records in 00:06:09.453 1+0 records out 00:06:09.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268126 s, 15.3 MB/s 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:09.453 20:01:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.453 20:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:09.713 { 00:06:09.713 "nbd_device": "/dev/nbd0", 00:06:09.713 "bdev_name": "Malloc0" 00:06:09.713 }, 00:06:09.713 { 00:06:09.713 "nbd_device": "/dev/nbd1", 00:06:09.713 "bdev_name": "Malloc1" 00:06:09.713 } 00:06:09.713 ]' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.713 { 00:06:09.713 "nbd_device": "/dev/nbd0", 00:06:09.713 "bdev_name": "Malloc0" 00:06:09.713 }, 00:06:09.713 { 00:06:09.713 "nbd_device": "/dev/nbd1", 00:06:09.713 "bdev_name": "Malloc1" 00:06:09.713 } 00:06:09.713 ]' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.713 /dev/nbd1' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.713 /dev/nbd1' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.713 256+0 records in 00:06:09.713 256+0 records out 00:06:09.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124705 s, 84.1 MB/s 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.713 256+0 records in 00:06:09.713 256+0 records out 00:06:09.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160885 s, 65.2 MB/s 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.713 20:01:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.713 256+0 records in 00:06:09.713 256+0 records out 00:06:09.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025 s, 41.9 MB/s 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.713 20:01:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.973 20:01:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.974 20:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.234 20:01:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.234 20:01:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.492 20:01:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.492 [2024-07-15 20:01:07.882637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.751 [2024-07-15 20:01:07.945732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.751 [2024-07-15 20:01:07.945736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.751 [2024-07-15 20:01:07.977912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.751 [2024-07-15 20:01:07.977946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.051 20:01:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.051 20:01:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:14.051 spdk_app_start Round 2 00:06:14.051 20:01:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 778112 /var/tmp/spdk-nbd.sock 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 778112 ']' 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
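The waitfornbd calls traced in each round boil down to two polls: wait for the device name to show up in /proc/partitions, then prove the device is actually readable by pulling a single 4 KiB block through O_DIRECT and checking the copied size. A simplified stand-alone version is sketched below; the 20-try limit matches the xtrace, while the sleep interval and scratch path are assumptions (the real helper lives in common/autotest_common.sh).

    wait_for_nbd() {
        local nbd_name=$1 scratch=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break    # device registered yet?
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$scratch")
                rm -f "$scratch"
                [ "$size" != 0 ] && return 0                    # read back a non-empty block
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_nbd nbd0 || echo "nbd0 never became readable" >&2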
00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.051 20:01:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:14.051 20:01:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.051 Malloc0 00:06:14.051 20:01:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.051 Malloc1 00:06:14.051 20:01:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.051 /dev/nbd0 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:14.051 1+0 records in 00:06:14.051 1+0 records out 00:06:14.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329112 s, 12.4 MB/s 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.051 20:01:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.051 20:01:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.312 /dev/nbd1 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.312 1+0 records in 00:06:14.312 1+0 records out 00:06:14.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299886 s, 13.7 MB/s 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.312 20:01:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.312 20:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:14.572 { 00:06:14.572 "nbd_device": "/dev/nbd0", 00:06:14.572 "bdev_name": "Malloc0" 00:06:14.572 }, 00:06:14.572 { 00:06:14.572 "nbd_device": "/dev/nbd1", 00:06:14.572 "bdev_name": "Malloc1" 00:06:14.572 } 00:06:14.572 ]' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.572 { 00:06:14.572 "nbd_device": "/dev/nbd0", 00:06:14.572 "bdev_name": "Malloc0" 00:06:14.572 }, 00:06:14.572 { 00:06:14.572 "nbd_device": "/dev/nbd1", 00:06:14.572 "bdev_name": "Malloc1" 00:06:14.572 } 00:06:14.572 ]' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.572 /dev/nbd1' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.572 /dev/nbd1' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.572 256+0 records in 00:06:14.572 256+0 records out 00:06:14.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115658 s, 90.7 MB/s 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.572 20:01:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.572 256+0 records in 00:06:14.572 256+0 records out 00:06:14.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165454 s, 63.4 MB/s 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.573 256+0 records in 00:06:14.573 256+0 records out 00:06:14.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167817 s, 62.5 MB/s 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.573 20:01:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.833 20:01:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.094 20:01:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.094 20:01:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.355 20:01:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.355 [2024-07-15 20:01:12.753835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.616 [2024-07-15 20:01:12.817433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.616 [2024-07-15 20:01:12.817436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.616 [2024-07-15 20:01:12.848842] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.616 [2024-07-15 20:01:12.848876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.966 20:01:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 778112 /var/tmp/spdk-nbd.sock 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 778112 ']' 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
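killprocess, which tears down app_repeat (pid 778112) below exactly as it did the scheduler app earlier, follows a defensive pattern: confirm a pid was passed and is still alive, look up its command name on Linux and refuse to signal a sudo wrapper, then send the default SIGTERM and reap the child with wait. A hedged sketch of that pattern (wait only works because the test app is a child of the same shell):

    kill_test_app() {
        local pid=$1 comm
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                     # still running?
        if [ "$(uname)" = Linux ]; then
            comm=$(ps --no-headers -o comm= "$pid")
            [ "$comm" = sudo ] && return 1             # never signal the sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }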
00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:18.966 20:01:15 event.app_repeat -- event/event.sh@39 -- # killprocess 778112 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 778112 ']' 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 778112 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 778112 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 778112' 00:06:18.966 killing process with pid 778112 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@967 -- # kill 778112 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@972 -- # wait 778112 00:06:18.966 spdk_app_start is called in Round 0. 00:06:18.966 Shutdown signal received, stop current app iteration 00:06:18.966 Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 reinitialization... 00:06:18.966 spdk_app_start is called in Round 1. 00:06:18.966 Shutdown signal received, stop current app iteration 00:06:18.966 Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 reinitialization... 00:06:18.966 spdk_app_start is called in Round 2. 00:06:18.966 Shutdown signal received, stop current app iteration 00:06:18.966 Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 reinitialization... 00:06:18.966 spdk_app_start is called in Round 3. 
00:06:18.966 Shutdown signal received, stop current app iteration 00:06:18.966 20:01:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.966 20:01:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.966 00:06:18.966 real 0m15.540s 00:06:18.966 user 0m33.452s 00:06:18.966 sys 0m2.116s 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.966 20:01:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.966 ************************************ 00:06:18.966 END TEST app_repeat 00:06:18.966 ************************************ 00:06:18.966 20:01:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.966 20:01:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.966 20:01:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.966 20:01:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.966 20:01:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.966 20:01:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.966 ************************************ 00:06:18.966 START TEST cpu_locks 00:06:18.966 ************************************ 00:06:18.966 20:01:16 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.966 * Looking for test storage... 00:06:18.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:18.966 20:01:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.966 20:01:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.966 20:01:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.966 20:01:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.966 20:01:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.966 20:01:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.966 20:01:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.966 ************************************ 00:06:18.966 START TEST default_locks 00:06:18.966 ************************************ 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=781431 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 781431 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 781431 ']' 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.966 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
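
waitforlisten above blocks until the freshly started target answers on its RPC socket, giving up after max_retries attempts. A rough sketch of that pattern, assuming rpc.py's -t/-s options and the rpc_get_methods RPC are available; the real helper in autotest_common.sh differs in detail:

    # Rough sketch of the wait-for-listen loop, not the verbatim helper.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        while (( i++ < max_retries )); do
            # The socket may not exist yet; ignore the failure and retry.
            if "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            # Give up early if the target died instead of coming up.
            kill -0 "$pid" 2>/dev/null || return 1
            sleep 0.5
        done
        return 1
    }
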
00:06:18.967 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.967 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.967 [2024-07-15 20:01:16.225446] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:18.967 [2024-07-15 20:01:16.225518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781431 ] 00:06:18.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.967 [2024-07-15 20:01:16.291417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.967 [2024-07-15 20:01:16.369496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.908 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.908 20:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:19.908 20:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 781431 00:06:19.908 20:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 781431 00:06:19.908 20:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.908 lslocks: write error 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 781431 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 781431 ']' 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 781431 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781431 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781431' 00:06:19.908 killing process with pid 781431 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 781431 00:06:19.908 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 781431 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 781431 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 781431 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 781431 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 781431 ']' 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (781431) - No such process 00:06:20.170 ERROR: process (pid: 781431) is no longer running 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.170 00:06:20.170 real 0m1.207s 00:06:20.170 user 0m1.281s 00:06:20.170 sys 0m0.367s 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.170 20:01:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.170 ************************************ 00:06:20.170 END TEST default_locks 00:06:20.170 ************************************ 00:06:20.170 20:01:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.170 20:01:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:20.170 20:01:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.170 20:01:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.170 20:01:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.170 ************************************ 00:06:20.170 START TEST default_locks_via_rpc 00:06:20.170 ************************************ 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=781732 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 781732 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 781732 ']' 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.170 20:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.170 [2024-07-15 20:01:17.492189] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:20.170 [2024-07-15 20:01:17.492236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781732 ] 00:06:20.170 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.170 [2024-07-15 20:01:17.551938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.432 [2024-07-15 20:01:17.613918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 781732 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 781732 00:06:21.003 20:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.572 20:01:18 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 781732 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 781732 ']' 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 781732 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781732 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781732' 00:06:21.572 killing process with pid 781732 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 781732 00:06:21.572 20:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 781732 00:06:21.572 00:06:21.572 real 0m1.568s 00:06:21.572 user 0m1.653s 00:06:21.572 sys 0m0.515s 00:06:21.572 20:01:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.572 20:01:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.572 ************************************ 00:06:21.572 END TEST default_locks_via_rpc 00:06:21.572 ************************************ 00:06:21.833 20:01:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:21.833 20:01:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:21.833 20:01:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.833 20:01:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.833 20:01:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.833 ************************************ 00:06:21.833 START TEST non_locking_app_on_locked_coremask 00:06:21.833 ************************************ 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=782100 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 782100 /var/tmp/spdk.sock 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 782100 ']' 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.833 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.833 [2024-07-15 20:01:19.134325] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:21.833 [2024-07-15 20:01:19.134381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782100 ] 00:06:21.833 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.833 [2024-07-15 20:01:19.198475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.093 [2024-07-15 20:01:19.273305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=782344 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 782344 /var/tmp/spdk2.sock 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 782344 ']' 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.667 20:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.667 [2024-07-15 20:01:19.966460] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:22.667 [2024-07-15 20:01:19.966513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782344 ] 00:06:22.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.667 [2024-07-15 20:01:20.061213] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
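
The locks_exist checks that follow look for the per-core lock in the target's lock table; the recurring "lslocks: write error" lines are harmless, they only mean grep -q closed the pipe as soon as it found a match. A sketch of that probe, assuming util-linux lslocks:

    # Sketch of the lock probe used throughout these tests (cpu_locks.sh@22 in the traces).
    locks_exist() {
        local pid=$1
        # A target that claimed a core holds a lock on /var/tmp/spdk_cpu_lock_NNN and
        # lslocks lists it against the owning pid; grep -q exiting early is what
        # produces the "lslocks: write error" messages seen in this log.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
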
00:06:22.667 [2024-07-15 20:01:20.061245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.930 [2024-07-15 20:01:20.191341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.506 20:01:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.506 20:01:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.506 20:01:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 782100 00:06:23.506 20:01:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 782100 00:06:23.506 20:01:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.075 lslocks: write error 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 782100 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 782100 ']' 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 782100 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782100 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782100' 00:06:24.075 killing process with pid 782100 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 782100 00:06:24.075 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 782100 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 782344 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 782344 ']' 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 782344 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782344 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782344' 00:06:24.334 killing 
process with pid 782344 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 782344 00:06:24.334 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 782344 00:06:24.594 00:06:24.594 real 0m2.898s 00:06:24.594 user 0m3.119s 00:06:24.594 sys 0m0.895s 00:06:24.594 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.594 20:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.594 ************************************ 00:06:24.594 END TEST non_locking_app_on_locked_coremask 00:06:24.594 ************************************ 00:06:24.594 20:01:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.594 20:01:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.594 20:01:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.594 20:01:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.594 20:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.854 ************************************ 00:06:24.854 START TEST locking_app_on_unlocked_coremask 00:06:24.854 ************************************ 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=782805 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 782805 /var/tmp/spdk.sock 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 782805 ']' 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.854 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.854 [2024-07-15 20:01:22.100637] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:24.854 [2024-07-15 20:01:22.100685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782805 ] 00:06:24.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.854 [2024-07-15 20:01:22.159769] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
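
killprocess, traced repeatedly above (most recently for pids 782100 and 782344), sanity-checks the pid before signalling it: confirm it is still alive, make sure the command name is an SPDK reactor rather than sudo, then kill and reap it. A condensed sketch, not the verbatim helper from autotest_common.sh:

    # Condensed sketch of the killprocess pattern; details of the real helper may differ.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # must still be running
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
            [ "$name" = sudo ] && return 1           # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap it if it is our child
    }
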
00:06:24.854 [2024-07-15 20:01:22.159796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.854 [2024-07-15 20:01:22.226696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=782835 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 782835 /var/tmp/spdk2.sock 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 782835 ']' 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 20:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.681 [2024-07-15 20:01:22.916638] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
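
Here the first target was started with --disable-cpumask-locks, so a second target can come up on the very same core mask as long as it uses its own RPC socket. A sketch of that arrangement; the backgrounding and pid capture are assumptions, they are not visible in the trace:

    # Sketch: two targets sharing core 0, possible because the first one skips the core lock.
    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # logs "CPU core locks deactivated."
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0, since nobody else locked it
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
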
00:06:25.681 [2024-07-15 20:01:22.916704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782835 ] 00:06:25.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.681 [2024-07-15 20:01:23.005944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.950 [2024-07-15 20:01:23.135629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.519 20:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.519 20:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.519 20:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 782835 00:06:26.519 20:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 782835 00:06:26.519 20:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.089 lslocks: write error 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 782805 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 782805 ']' 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 782805 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782805 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782805' 00:06:27.089 killing process with pid 782805 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 782805 00:06:27.089 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 782805 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 782835 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 782835 ']' 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 782835 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782835 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782835' 00:06:27.350 killing process with pid 782835 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 782835 00:06:27.350 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 782835 00:06:27.610 00:06:27.610 real 0m2.935s 00:06:27.610 user 0m3.208s 00:06:27.610 sys 0m0.847s 00:06:27.610 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.610 20:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.610 ************************************ 00:06:27.610 END TEST locking_app_on_unlocked_coremask 00:06:27.610 ************************************ 00:06:27.610 20:01:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.610 20:01:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:27.610 20:01:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.610 20:01:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.610 20:01:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.870 ************************************ 00:06:27.870 START TEST locking_app_on_locked_coremask 00:06:27.870 ************************************ 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=783466 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 783466 /var/tmp/spdk.sock 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 783466 ']' 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.870 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.870 [2024-07-15 20:01:25.106449] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:06:27.870 [2024-07-15 20:01:25.106505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783466 ] 00:06:27.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.870 [2024-07-15 20:01:25.166870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.870 [2024-07-15 20:01:25.237364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=783522 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 783522 /var/tmp/spdk2.sock 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 783522 /var/tmp/spdk2.sock 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 783522 /var/tmp/spdk2.sock 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 783522 ']' 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.441 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 [2024-07-15 20:01:25.925578] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
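
NOT wraps a command that is expected to fail: it records the exit status and succeeds only if that status is non-zero, which is how the test asserts that a second target on the already-claimed core never starts listening (the claim error just below is the expected outcome). A simplified sketch; the traced helper also distinguishes shell functions from binaries:

    # Simplified sketch of the NOT pattern (autotest_common.sh@648..675 in the traces).
    NOT() {
        local es=0
        "$@" || es=$?
        # For NOT, success means the wrapped command failed.
        (( es != 0 ))
    }

    # Usage as in the trace: the second target must never come up on its socket.
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
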
00:06:28.702 [2024-07-15 20:01:25.925642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783522 ] 00:06:28.702 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.702 [2024-07-15 20:01:26.013090] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 783466 has claimed it. 00:06:28.702 [2024-07-15 20:01:26.013137] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (783522) - No such process 00:06:29.272 ERROR: process (pid: 783522) is no longer running 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 783466 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 783466 00:06:29.272 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.532 lslocks: write error 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 783466 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 783466 ']' 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 783466 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 783466 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 783466' 00:06:29.532 killing process with pid 783466 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 783466 00:06:29.532 20:01:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 783466 00:06:29.792 00:06:29.792 real 0m2.042s 00:06:29.792 user 0m2.270s 00:06:29.792 sys 0m0.551s 00:06:29.792 20:01:27 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.792 20:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.792 ************************************ 00:06:29.792 END TEST locking_app_on_locked_coremask 00:06:29.792 ************************************ 00:06:29.792 20:01:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:29.792 20:01:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.792 20:01:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.792 20:01:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.792 20:01:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.792 ************************************ 00:06:29.792 START TEST locking_overlapped_coremask 00:06:29.792 ************************************ 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=783890 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 783890 /var/tmp/spdk.sock 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 783890 ']' 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.792 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.792 [2024-07-15 20:01:27.223622] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:06:29.792 [2024-07-15 20:01:27.223678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783890 ] 00:06:30.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.052 [2024-07-15 20:01:27.286281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.052 [2024-07-15 20:01:27.362374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.052 [2024-07-15 20:01:27.362548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.052 [2024-07-15 20:01:27.362551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=783967 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 783967 /var/tmp/spdk2.sock 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 783967 /var/tmp/spdk2.sock 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:30.622 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 783967 /var/tmp/spdk2.sock 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 783967 ']' 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.622 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.622 [2024-07-15 20:01:28.048990] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
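
With the first target holding cores 0-2 (-m 0x7), the overlapping second target (-m 0x1c) is refused core 2 just below, and check_remaining_locks then verifies that exactly the three expected lock files survive. A sketch of that check, mirroring the arrays in the trace:

    # Sketch of check_remaining_locks (cpu_locks.sh@36..38 in the traces).
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1 and 2
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }
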
00:06:30.622 [2024-07-15 20:01:28.049043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783967 ] 00:06:30.882 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.882 [2024-07-15 20:01:28.118674] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 783890 has claimed it. 00:06:30.882 [2024-07-15 20:01:28.118704] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (783967) - No such process 00:06:31.452 ERROR: process (pid: 783967) is no longer running 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 783890 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 783890 ']' 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 783890 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 783890 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 783890' 00:06:31.452 killing process with pid 783890 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 783890 00:06:31.452 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 783890 00:06:31.713 00:06:31.713 real 0m1.759s 00:06:31.713 user 0m4.963s 00:06:31.713 sys 0m0.368s 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.713 ************************************ 00:06:31.713 END TEST locking_overlapped_coremask 00:06:31.713 ************************************ 00:06:31.713 20:01:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.713 20:01:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:31.713 20:01:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.713 20:01:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.713 20:01:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.713 ************************************ 00:06:31.713 START TEST locking_overlapped_coremask_via_rpc 00:06:31.713 ************************************ 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=784257 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 784257 /var/tmp/spdk.sock 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:31.713 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 784257 ']' 00:06:31.714 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.714 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.714 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.714 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.714 20:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.714 [2024-07-15 20:01:29.052344] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:31.714 [2024-07-15 20:01:29.052395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784257 ] 00:06:31.714 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.714 [2024-07-15 20:01:29.111438] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.714 [2024-07-15 20:01:29.111467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.973 [2024-07-15 20:01:29.175235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.973 [2024-07-15 20:01:29.175457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.973 [2024-07-15 20:01:29.175460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.543 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=784423 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 784423 /var/tmp/spdk2.sock 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 784423 ']' 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.544 20:01:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.544 [2024-07-15 20:01:29.875876] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:32.544 [2024-07-15 20:01:29.875931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784423 ] 00:06:32.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.544 [2024-07-15 20:01:29.949450] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.544 [2024-07-15 20:01:29.949475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.804 [2024-07-15 20:01:30.066548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.804 [2024-07-15 20:01:30.066565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.804 [2024-07-15 20:01:30.066564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.372 [2024-07-15 20:01:30.656190] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 784257 has claimed it. 
00:06:33.372 request: 00:06:33.372 { 00:06:33.372 "method": "framework_enable_cpumask_locks", 00:06:33.372 "req_id": 1 00:06:33.372 } 00:06:33.372 Got JSON-RPC error response 00:06:33.372 response: 00:06:33.372 { 00:06:33.372 "code": -32603, 00:06:33.372 "message": "Failed to claim CPU core: 2" 00:06:33.372 } 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:33.372 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 784257 /var/tmp/spdk.sock 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 784257 ']' 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.373 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 784423 /var/tmp/spdk2.sock 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 784423 ']' 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.634 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.634 00:06:33.634 real 0m2.009s 00:06:33.634 user 0m0.763s 00:06:33.634 sys 0m0.164s 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.634 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.634 ************************************ 00:06:33.634 END TEST locking_overlapped_coremask_via_rpc 00:06:33.634 ************************************ 00:06:33.634 20:01:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.634 20:01:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:33.634 20:01:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 784257 ]] 00:06:33.634 20:01:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 784257 00:06:33.634 20:01:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 784257 ']' 00:06:33.634 20:01:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 784257 00:06:33.634 20:01:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:33.634 20:01:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.634 20:01:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784257 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784257' 00:06:33.896 killing process with pid 784257 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 784257 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 784257 00:06:33.896 20:01:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 784423 ]] 00:06:33.896 20:01:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 784423 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 784423 ']' 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 784423 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.896 20:01:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784423 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784423' 00:06:34.157 killing process with pid 784423 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 784423 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 784423 00:06:34.157 20:01:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.157 20:01:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:34.157 20:01:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 784257 ]] 00:06:34.157 20:01:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 784257 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 784257 ']' 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 784257 00:06:34.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (784257) - No such process 00:06:34.157 20:01:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 784257 is not found' 00:06:34.157 Process with pid 784257 is not found 00:06:34.157 20:01:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 784423 ]] 00:06:34.158 20:01:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 784423 00:06:34.158 20:01:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 784423 ']' 00:06:34.158 20:01:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 784423 00:06:34.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (784423) - No such process 00:06:34.158 20:01:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 784423 is not found' 00:06:34.158 Process with pid 784423 is not found 00:06:34.158 20:01:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.158 00:06:34.158 real 0m15.547s 00:06:34.158 user 0m26.823s 00:06:34.158 sys 0m4.590s 00:06:34.158 20:01:31 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.158 20:01:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.158 ************************************ 00:06:34.158 END TEST cpu_locks 00:06:34.158 ************************************ 00:06:34.418 20:01:31 event -- common/autotest_common.sh@1142 -- # return 0 00:06:34.418 00:06:34.418 real 0m40.971s 00:06:34.418 user 1m19.612s 00:06:34.418 sys 0m7.674s 00:06:34.418 20:01:31 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.418 20:01:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.418 ************************************ 00:06:34.418 END TEST event 00:06:34.418 ************************************ 00:06:34.418 20:01:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.418 20:01:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:34.418 20:01:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.418 20:01:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.418 20:01:31 -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.418 ************************************ 00:06:34.418 START TEST thread 00:06:34.418 ************************************ 00:06:34.418 20:01:31 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:34.418 * Looking for test storage... 00:06:34.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:34.418 20:01:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:34.418 20:01:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:34.418 20:01:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.418 20:01:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.418 ************************************ 00:06:34.418 START TEST thread_poller_perf 00:06:34.418 ************************************ 00:06:34.418 20:01:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:34.418 [2024-07-15 20:01:31.844378] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:34.418 [2024-07-15 20:01:31.844468] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785018 ] 00:06:34.678 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.678 [2024-07-15 20:01:31.907368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.678 [2024-07-15 20:01:31.971239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.678 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:35.619 ====================================== 00:06:35.619 busy:2408667984 (cyc) 00:06:35.619 total_run_count: 287000 00:06:35.619 tsc_hz: 2400000000 (cyc) 00:06:35.619 ====================================== 00:06:35.619 poller_cost: 8392 (cyc), 3496 (nsec) 00:06:35.619 00:06:35.619 real 0m1.210s 00:06:35.619 user 0m1.132s 00:06:35.619 sys 0m0.075s 00:06:35.619 20:01:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.619 20:01:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.619 ************************************ 00:06:35.619 END TEST thread_poller_perf 00:06:35.619 ************************************ 00:06:35.906 20:01:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:35.906 20:01:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.906 20:01:33 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:35.906 20:01:33 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.906 20:01:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.906 ************************************ 00:06:35.906 START TEST thread_poller_perf 00:06:35.906 ************************************ 00:06:35.906 20:01:33 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.906 [2024-07-15 20:01:33.129051] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:35.906 [2024-07-15 20:01:33.129153] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785166 ] 00:06:35.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.906 [2024-07-15 20:01:33.192611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.906 [2024-07-15 20:01:33.259401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.906 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:36.879 ====================================== 00:06:36.879 busy:2402161880 (cyc) 00:06:36.879 total_run_count: 3810000 00:06:36.879 tsc_hz: 2400000000 (cyc) 00:06:36.879 ====================================== 00:06:36.879 poller_cost: 630 (cyc), 262 (nsec) 00:06:37.138 00:06:37.138 real 0m1.208s 00:06:37.138 user 0m1.140s 00:06:37.138 sys 0m0.064s 00:06:37.138 20:01:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.138 20:01:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.138 ************************************ 00:06:37.138 END TEST thread_poller_perf 00:06:37.138 ************************************ 00:06:37.138 20:01:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:37.138 20:01:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:37.138 00:06:37.138 real 0m2.669s 00:06:37.138 user 0m2.371s 00:06:37.138 sys 0m0.306s 00:06:37.138 20:01:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.138 20:01:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.138 ************************************ 00:06:37.138 END TEST thread 00:06:37.138 ************************************ 00:06:37.138 20:01:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.138 20:01:34 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:37.138 20:01:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.138 20:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.138 20:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.138 ************************************ 00:06:37.138 START TEST accel 00:06:37.138 ************************************ 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:37.138 * Looking for test storage... 00:06:37.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:37.138 20:01:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:37.138 20:01:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:37.138 20:01:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.138 20:01:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=785460 00:06:37.138 20:01:34 accel -- accel/accel.sh@63 -- # waitforlisten 785460 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@829 -- # '[' -z 785460 ']' 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.138 20:01:34 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.138 20:01:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:37.138 20:01:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.138 20:01:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.138 20:01:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.138 20:01:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.138 20:01:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.138 20:01:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.138 20:01:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:37.138 20:01:34 accel -- accel/accel.sh@41 -- # jq -r . 00:06:37.397 [2024-07-15 20:01:34.589927] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:37.397 [2024-07-15 20:01:34.589993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785460 ] 00:06:37.397 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.397 [2024-07-15 20:01:34.655852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.397 [2024-07-15 20:01:34.731998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.969 20:01:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.969 20:01:35 accel -- common/autotest_common.sh@862 -- # return 0 00:06:37.969 20:01:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:37.969 20:01:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:37.969 20:01:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:37.969 20:01:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:37.969 20:01:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:37.969 20:01:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:37.969 20:01:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.969 20:01:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:37.969 20:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.969 20:01:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.230 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.230 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.230 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.230 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.230 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.230 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.230 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.230 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 
20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.231 20:01:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.231 20:01:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.231 20:01:35 accel -- accel/accel.sh@75 -- # killprocess 785460 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@948 -- # '[' -z 785460 ']' 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@952 -- # kill -0 785460 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@953 -- # uname 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 785460 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 785460' 00:06:38.231 killing process with pid 785460 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@967 -- # kill 785460 00:06:38.231 20:01:35 accel -- common/autotest_common.sh@972 -- # wait 785460 00:06:38.492 20:01:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:38.492 20:01:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.492 20:01:35 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:38.492 20:01:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:38.492 20:01:35 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.492 20:01:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.492 20:01:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.492 20:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.492 ************************************ 00:06:38.492 START TEST accel_missing_filename 00:06:38.492 ************************************ 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.492 20:01:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:38.492 20:01:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:38.492 [2024-07-15 20:01:35.837960] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:38.492 [2024-07-15 20:01:35.838055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785828 ] 00:06:38.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.492 [2024-07-15 20:01:35.900695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.753 [2024-07-15 20:01:35.965058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.753 [2024-07-15 20:01:35.996843] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.753 [2024-07-15 20:01:36.033542] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:38.753 A filename is required. 
00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.753 00:06:38.753 real 0m0.280s 00:06:38.753 user 0m0.218s 00:06:38.753 sys 0m0.103s 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.753 20:01:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:38.753 ************************************ 00:06:38.753 END TEST accel_missing_filename 00:06:38.753 ************************************ 00:06:38.753 20:01:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.753 20:01:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.753 20:01:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:38.753 20:01:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.753 20:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.753 ************************************ 00:06:38.753 START TEST accel_compress_verify 00:06:38.753 ************************************ 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.753 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.753 20:01:36 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:38.753 20:01:36 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:39.013 [2024-07-15 20:01:36.190173] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:39.013 [2024-07-15 20:01:36.190274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785853 ] 00:06:39.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.013 [2024-07-15 20:01:36.250932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.013 [2024-07-15 20:01:36.313378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.013 [2024-07-15 20:01:36.345375] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.013 [2024-07-15 20:01:36.382359] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:39.013 00:06:39.013 Compression does not support the verify option, aborting. 00:06:39.013 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:39.013 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.014 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:39.014 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:39.014 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:39.014 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.014 00:06:39.014 real 0m0.277s 00:06:39.014 user 0m0.217s 00:06:39.014 sys 0m0.102s 00:06:39.014 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.014 20:01:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:39.014 ************************************ 00:06:39.014 END TEST accel_compress_verify 00:06:39.014 ************************************ 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.274 20:01:36 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.274 ************************************ 00:06:39.274 START TEST accel_wrong_workload 00:06:39.274 ************************************ 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.274 20:01:36 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:39.274 20:01:36 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:39.274 Unsupported workload type: foobar 00:06:39.274 [2024-07-15 20:01:36.537753] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:39.274 accel_perf options: 00:06:39.274 [-h help message] 00:06:39.274 [-q queue depth per core] 00:06:39.274 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:39.274 [-T number of threads per core 00:06:39.274 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:39.274 [-t time in seconds] 00:06:39.274 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:39.274 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:39.274 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:39.274 [-l for compress/decompress workloads, name of uncompressed input file 00:06:39.274 [-S for crc32c workload, use this seed value (default 0) 00:06:39.274 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:39.274 [-f for fill workload, use this BYTE value (default 255) 00:06:39.274 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:39.274 [-y verify result if this switch is on] 00:06:39.274 [-a tasks to allocate per core (default: same value as -q)] 00:06:39.274 Can be used to spread operations across a wider range of memory. 
00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.274 00:06:39.274 real 0m0.036s 00:06:39.274 user 0m0.023s 00:06:39.274 sys 0m0.013s 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.274 20:01:36 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:39.274 ************************************ 00:06:39.274 END TEST accel_wrong_workload 00:06:39.274 ************************************ 00:06:39.274 Error: writing output failed: Broken pipe 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.274 20:01:36 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.274 ************************************ 00:06:39.274 START TEST accel_negative_buffers 00:06:39.274 ************************************ 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:39.274 20:01:36 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:39.274 -x option must be non-negative. 
00:06:39.274 [2024-07-15 20:01:36.652274] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:39.274 accel_perf options: 00:06:39.274 [-h help message] 00:06:39.274 [-q queue depth per core] 00:06:39.274 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:39.274 [-T number of threads per core 00:06:39.274 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:39.274 [-t time in seconds] 00:06:39.274 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:39.274 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:39.274 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:39.274 [-l for compress/decompress workloads, name of uncompressed input file 00:06:39.274 [-S for crc32c workload, use this seed value (default 0) 00:06:39.274 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:39.274 [-f for fill workload, use this BYTE value (default 255) 00:06:39.274 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:39.274 [-y verify result if this switch is on] 00:06:39.274 [-a tasks to allocate per core (default: same value as -q)] 00:06:39.274 Can be used to spread operations across a wider range of memory. 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.274 00:06:39.274 real 0m0.037s 00:06:39.274 user 0m0.021s 00:06:39.274 sys 0m0.015s 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.274 20:01:36 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:39.274 ************************************ 00:06:39.274 END TEST accel_negative_buffers 00:06:39.274 ************************************ 00:06:39.274 Error: writing output failed: Broken pipe 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.274 20:01:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.274 20:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.534 ************************************ 00:06:39.534 START TEST accel_crc32c 00:06:39.534 ************************************ 00:06:39.534 20:01:36 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:39.534 20:01:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:39.534 20:01:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:39.534 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.534 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.534 20:01:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:39.535 [2024-07-15 20:01:36.764818] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:39.535 [2024-07-15 20:01:36.764898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786124 ] 00:06:39.535 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.535 [2024-07-15 20:01:36.826346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.535 [2024-07-15 20:01:36.892452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.535 20:01:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:40.920 20:01:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.920 00:06:40.920 real 0m1.284s 00:06:40.920 user 0m1.201s 00:06:40.920 sys 0m0.096s 00:06:40.920 20:01:38 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.920 20:01:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:40.920 ************************************ 00:06:40.920 END TEST accel_crc32c 00:06:40.920 ************************************ 00:06:40.920 20:01:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.920 20:01:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:40.920 20:01:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:40.920 20:01:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.920 20:01:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.920 ************************************ 00:06:40.920 START TEST accel_crc32c_C2 00:06:40.920 ************************************ 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.920 20:01:38 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:40.920 [2024-07-15 20:01:38.126384] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:40.920 [2024-07-15 20:01:38.126480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786314 ] 00:06:40.920 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.920 [2024-07-15 20:01:38.187974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.920 [2024-07-15 20:01:38.253382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.920 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.921 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.921 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:40.921 20:01:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.304 00:06:42.304 real 0m1.284s 00:06:42.304 user 0m1.199s 00:06:42.304 sys 0m0.097s 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.304 20:01:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:42.304 ************************************ 00:06:42.304 END TEST accel_crc32c_C2 00:06:42.304 ************************************ 00:06:42.304 20:01:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.304 20:01:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:42.304 20:01:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.304 20:01:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.304 20:01:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.304 ************************************ 00:06:42.304 START TEST accel_copy 00:06:42.304 ************************************ 00:06:42.304 20:01:39 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:42.304 [2024-07-15 20:01:39.487554] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:42.304 [2024-07-15 20:01:39.487623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786623 ] 00:06:42.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.304 [2024-07-15 20:01:39.548565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.304 [2024-07-15 20:01:39.615041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.304 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.305 20:01:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 
20:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:43.701 20:01:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.701 00:06:43.701 real 0m1.286s 00:06:43.701 user 0m1.196s 00:06:43.701 sys 0m0.101s 00:06:43.701 20:01:40 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.701 20:01:40 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.701 ************************************ 00:06:43.701 END TEST accel_copy 00:06:43.701 ************************************ 00:06:43.701 20:01:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.701 20:01:40 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.701 20:01:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:43.701 20:01:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.701 20:01:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.701 ************************************ 00:06:43.701 START TEST accel_fill 00:06:43.701 ************************************ 00:06:43.701 20:01:40 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:43.701 20:01:40 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:43.701 [2024-07-15 20:01:40.846018] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:43.701 [2024-07-15 20:01:40.846082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786970 ] 00:06:43.701 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.701 [2024-07-15 20:01:40.908992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.701 [2024-07-15 20:01:40.978242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.701 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.702 20:01:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:45.089 20:01:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.089 00:06:45.089 real 0m1.289s 00:06:45.089 user 0m1.193s 00:06:45.089 sys 0m0.106s 00:06:45.089 20:01:42 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.089 20:01:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:45.089 ************************************ 00:06:45.089 END TEST accel_fill 00:06:45.089 ************************************ 00:06:45.089 20:01:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.089 20:01:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:45.089 20:01:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.089 20:01:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.089 20:01:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.089 ************************************ 00:06:45.089 START TEST accel_copy_crc32c 00:06:45.089 ************************************ 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:45.089 [2024-07-15 20:01:42.207864] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:45.089 [2024-07-15 20:01:42.207956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787319 ] 00:06:45.089 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.089 [2024-07-15 20:01:42.268941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.089 [2024-07-15 20:01:42.332666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.089 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.090 
20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.090 20:01:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.030 00:06:46.030 real 0m1.280s 00:06:46.030 user 0m1.191s 00:06:46.030 sys 0m0.101s 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.030 20:01:43 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:46.030 ************************************ 00:06:46.030 END TEST accel_copy_crc32c 00:06:46.030 ************************************ 00:06:46.290 20:01:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.290 20:01:43 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.290 20:01:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:46.290 20:01:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.290 20:01:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.290 ************************************ 00:06:46.290 START TEST accel_copy_crc32c_C2 00:06:46.290 ************************************ 00:06:46.290 20:01:43 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.290 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:46.290 [2024-07-15 20:01:43.566671] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:46.290 [2024-07-15 20:01:43.566732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787589 ] 00:06:46.290 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.290 [2024-07-15 20:01:43.627360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.290 [2024-07-15 20:01:43.693141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.550 20:01:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.494 00:06:47.494 real 0m1.288s 00:06:47.494 user 0m1.195s 00:06:47.494 sys 0m0.105s 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.494 20:01:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:47.494 ************************************ 00:06:47.494 END TEST accel_copy_crc32c_C2 00:06:47.494 ************************************ 00:06:47.494 20:01:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.494 20:01:44 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:47.494 20:01:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:47.494 20:01:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.494 20:01:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.494 ************************************ 00:06:47.494 START TEST accel_dualcast 00:06:47.494 ************************************ 00:06:47.494 20:01:44 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:47.494 20:01:44 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:47.754 [2024-07-15 20:01:44.931111] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:06:47.754 [2024-07-15 20:01:44.931208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787783 ] 00:06:47.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.754 [2024-07-15 20:01:44.994847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.754 [2024-07-15 20:01:45.066876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.754 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:47.755 20:01:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:49.133 20:01:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.133 00:06:49.133 real 0m1.294s 00:06:49.133 user 0m1.196s 00:06:49.133 sys 0m0.108s 00:06:49.133 20:01:46 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.133 20:01:46 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 ************************************ 00:06:49.133 END TEST accel_dualcast 00:06:49.133 ************************************ 00:06:49.133 20:01:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.133 20:01:46 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:49.133 20:01:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.133 20:01:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.133 20:01:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 ************************************ 00:06:49.133 START TEST accel_compare 00:06:49.133 ************************************ 00:06:49.133 20:01:46 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:49.133 [2024-07-15 20:01:46.300148] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
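The long runs of accel.sh@19-@23 xtrace above (IFS=:, read -r var val, case "$var" in, accel_module=..., accel_opc=...) are one parsing loop in the accel test script that splits what appears to be accel_perf's configuration printout on ':' and records the module and opcode for the checks at the end of each case. A rough skeleton consistent with that trace, not the verbatim script; the case patterns and the input source are placeholders:

  while IFS=: read -r var val; do          # accel.sh@19
    case "$var" in                         # accel.sh@21
      *[Mm]odule*) accel_module=$val ;;    # -> accel_module=software   (accel.sh@22)
      *[Oo]p*)     accel_opc=$val ;;       # -> accel_opc=dualcast etc. (accel.sh@23)
    esac
  done                                     # input redirection omitted; not visible in this trace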
00:06:49.133 [2024-07-15 20:01:46.300213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788058 ] 00:06:49.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.133 [2024-07-15 20:01:46.370235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.133 [2024-07-15 20:01:46.435734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.133 20:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 
20:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:50.514 20:01:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.514 00:06:50.514 real 0m1.292s 00:06:50.514 user 0m1.199s 00:06:50.514 sys 0m0.105s 00:06:50.514 20:01:47 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.514 20:01:47 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 ************************************ 00:06:50.514 END TEST accel_compare 00:06:50.514 ************************************ 00:06:50.514 20:01:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.514 20:01:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:50.514 20:01:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:50.514 20:01:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.514 20:01:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.514 ************************************ 00:06:50.514 START TEST accel_xor 00:06:50.514 ************************************ 00:06:50.514 20:01:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:50.514 [2024-07-15 20:01:47.669169] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
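The three accel.sh@27 checks that close each case, seen here for compare as [[ -n software ]], [[ -n compare ]] and [[ software == \s\o\f\t\w\a\r\e ]], are the pass criteria with the variables already expanded by xtrace. With the variables left in place (names taken from the @22/@23 assignments; a paraphrase, not the exact script text) they amount to:

  [[ -n "$accel_module" ]]              # some accel module was detected
  [[ -n "$accel_opc" ]]                 # the requested opcode (compare here) was picked up
  [[ "$accel_module" == software ]]     # and it ran on the software path rather than a hardware module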
00:06:50.514 [2024-07-15 20:01:47.669234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788413 ] 00:06:50.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.514 [2024-07-15 20:01:47.729560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.514 [2024-07-15 20:01:47.794675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.514 20:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.898 00:06:51.898 real 0m1.281s 00:06:51.898 user 0m1.192s 00:06:51.898 sys 0m0.100s 00:06:51.898 20:01:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.898 20:01:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:51.898 ************************************ 00:06:51.898 END TEST accel_xor 00:06:51.898 ************************************ 00:06:51.898 20:01:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.898 20:01:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:51.898 20:01:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.898 20:01:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.898 20:01:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.898 ************************************ 00:06:51.898 START TEST accel_xor 00:06:51.898 ************************************ 00:06:51.898 20:01:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.898 20:01:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.898 20:01:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.899 [2024-07-15 20:01:49.027682] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
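Each of these blocks is launched through run_test, e.g. run_test accel_xor accel_test -t 1 -w xor -y -x 3 for the three-source XOR case that starts here, and the surrounding banner/timing pattern in the log (the starred START TEST / END TEST separators, the real/user/sys lines, xtrace_disable, return 0) is what that wrapper from common/autotest_common.sh emits. A heavily simplified paraphrase of its shape, not the actual function body:

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"          # accounts for the real/user/sys lines in this log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }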
00:06:51.899 [2024-07-15 20:01:49.027749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788760 ] 00:06:51.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.899 [2024-07-15 20:01:49.090940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.899 [2024-07-15 20:01:49.157515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 20:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:53.318 20:01:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.318 00:06:53.318 real 0m1.289s 00:06:53.318 user 0m1.194s 00:06:53.318 sys 0m0.106s 00:06:53.318 20:01:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.318 20:01:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:53.318 ************************************ 00:06:53.318 END TEST accel_xor 00:06:53.318 ************************************ 00:06:53.318 20:01:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.318 20:01:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:53.318 20:01:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:53.318 20:01:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.318 20:01:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.318 ************************************ 00:06:53.318 START TEST accel_dif_verify 00:06:53.319 ************************************ 00:06:53.319 20:01:50 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:53.319 [2024-07-15 20:01:50.391992] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
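The recurring "EAL: No free 2048 kB hugepages reported on node 1" line comes out of DPDK EAL initialization inside accel_perf (started with the --no-shconf/--huge-unlink/--no-telemetry EAL parameters shown) and appears to be informational here, since the software-path cases above still pass. If it ever needs checking on a similar box, the per-node 2 MB hugepage counters can be read straight from sysfs, for example:

  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages \
            /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages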
00:06:53.319 [2024-07-15 20:01:50.392057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789076 ] 00:06:53.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.319 [2024-07-15 20:01:50.452720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.319 [2024-07-15 20:01:50.518891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.319 20:01:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:54.260 20:01:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.260 00:06:54.260 real 0m1.285s 00:06:54.260 user 0m1.206s 00:06:54.260 sys 0m0.092s 00:06:54.260 20:01:51 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.260 20:01:51 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:54.260 ************************************ 00:06:54.260 END TEST accel_dif_verify 00:06:54.260 ************************************ 00:06:54.260 20:01:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.260 20:01:51 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:54.260 20:01:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:54.260 20:01:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.260 20:01:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.520 ************************************ 00:06:54.520 START TEST accel_dif_generate 00:06:54.520 ************************************ 00:06:54.520 20:01:51 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 
20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:54.520 [2024-07-15 20:01:51.753654] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:54.520 [2024-07-15 20:01:51.753720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789268 ] 00:06:54.520 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.520 [2024-07-15 20:01:51.815047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.520 [2024-07-15 20:01:51.882226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:54.520 20:01:51 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.520 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.521 20:01:51 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.521 20:01:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.931 20:01:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:55.931 20:01:53 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.931 00:06:55.931 real 0m1.286s 00:06:55.931 user 0m1.197s 00:06:55.931 sys 0m0.103s 00:06:55.931 20:01:53 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.931 20:01:53 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:55.931 ************************************ 00:06:55.931 END TEST accel_dif_generate 00:06:55.931 ************************************ 00:06:55.931 20:01:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.931 20:01:53 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:55.931 20:01:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:55.931 20:01:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.931 20:01:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.931 ************************************ 00:06:55.931 START TEST accel_dif_generate_copy 00:06:55.931 ************************************ 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:55.931 [2024-07-15 20:01:53.117798] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:06:55.931 [2024-07-15 20:01:53.117891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789499 ] 00:06:55.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.931 [2024-07-15 20:01:53.179640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.931 [2024-07-15 20:01:53.245269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.931 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.932 20:01:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.325 00:06:57.325 real 0m1.287s 00:06:57.325 user 0m1.203s 00:06:57.325 sys 0m0.095s 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.325 20:01:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.325 ************************************ 00:06:57.325 END TEST accel_dif_generate_copy 00:06:57.325 ************************************ 00:06:57.325 20:01:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.325 20:01:54 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:57.325 20:01:54 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.325 20:01:54 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:57.325 20:01:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.325 20:01:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.325 ************************************ 00:06:57.325 START TEST accel_comp 00:06:57.325 ************************************ 00:06:57.325 20:01:54 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.325 20:01:54 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:57.325 [2024-07-15 20:01:54.482497] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:06:57.325 [2024-07-15 20:01:54.482598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789852 ] 00:06:57.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.325 [2024-07-15 20:01:54.544542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.325 [2024-07-15 20:01:54.611245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.325 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.326 20:01:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:58.707 20:01:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.707 00:06:58.707 real 0m1.291s 00:06:58.707 user 0m1.210s 00:06:58.707 sys 0m0.094s 00:06:58.707 20:01:55 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.707 20:01:55 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:58.707 ************************************ 00:06:58.707 END TEST accel_comp 00:06:58.707 ************************************ 00:06:58.707 20:01:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.707 20:01:55 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.707 20:01:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.707 20:01:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.707 20:01:55 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.707 ************************************ 00:06:58.707 START TEST accel_decomp 00:06:58.707 ************************************ 00:06:58.707 20:01:55 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.707 20:01:55 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:58.707 20:01:55 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:58.707 20:01:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.707 20:01:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:58.708 20:01:55 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:58.708 [2024-07-15 20:01:55.848645] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:06:58.708 [2024-07-15 20:01:55.848738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790206 ] 00:06:58.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.708 [2024-07-15 20:01:55.909716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.708 [2024-07-15 20:01:55.975724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.708 20:01:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.086 20:01:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.086 00:07:00.086 real 0m1.289s 00:07:00.086 user 0m1.204s 00:07:00.086 sys 0m0.097s 00:07:00.086 20:01:57 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.086 20:01:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:00.086 ************************************ 00:07:00.086 END TEST accel_decomp 00:07:00.086 ************************************ 00:07:00.086 20:01:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.086 20:01:57 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:00.086 20:01:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:00.086 20:01:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.086 20:01:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.086 ************************************ 00:07:00.086 START TEST accel_decomp_full 00:07:00.086 ************************************ 00:07:00.086 20:01:57 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:00.086 20:01:57 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:00.086 [2024-07-15 20:01:57.213681] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:07:00.086 [2024-07-15 20:01:57.213772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790548 ] 00:07:00.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.086 [2024-07-15 20:01:57.286167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.086 [2024-07-15 20:01:57.358098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.086 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.087 20:01:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.470 20:01:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.470 00:07:01.470 real 0m1.320s 00:07:01.470 user 0m1.215s 00:07:01.470 sys 0m0.119s 00:07:01.470 20:01:58 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.470 20:01:58 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:01.470 ************************************ 00:07:01.470 END TEST accel_decomp_full 00:07:01.470 ************************************ 00:07:01.470 20:01:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.470 20:01:58 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.470 20:01:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:01.470 20:01:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.470 20:01:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.470 ************************************ 00:07:01.470 START TEST accel_decomp_mcore 00:07:01.470 ************************************ 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:01.470 [2024-07-15 20:01:58.610776] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:07:01.470 [2024-07-15 20:01:58.610866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790753 ] 00:07:01.470 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.470 [2024-07-15 20:01:58.674673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.470 [2024-07-15 20:01:58.746884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.470 [2024-07-15 20:01:58.747001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.470 [2024-07-15 20:01:58.747194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.470 [2024-07-15 20:01:58.747193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:01.470 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:01.471 20:01:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.884 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.885 00:07:02.885 real 0m1.304s 00:07:02.885 user 0m4.438s 00:07:02.885 sys 0m0.113s 00:07:02.885 20:01:59 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.885 20:01:59 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:02.885 ************************************ 00:07:02.885 END TEST accel_decomp_mcore 00:07:02.885 ************************************ 00:07:02.885 20:01:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.885 20:01:59 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.885 20:01:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:02.885 20:01:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.885 20:01:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.885 ************************************ 00:07:02.885 START TEST accel_decomp_full_mcore 00:07:02.885 ************************************ 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.885 20:01:59 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.885 [2024-07-15 20:01:59.996119] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:07:02.885 [2024-07-15 20:01:59.996239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790965 ] 00:07:02.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.885 [2024-07-15 20:02:00.065255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.885 [2024-07-15 20:02:00.141071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.885 [2024-07-15 20:02:00.141209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.885 [2024-07-15 20:02:00.141269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.885 [2024-07-15 20:02:00.141479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.885 20:02:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.270 00:07:04.270 real 0m1.331s 00:07:04.270 user 0m4.493s 00:07:04.270 sys 0m0.120s 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.270 20:02:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.270 ************************************ 00:07:04.270 END TEST accel_decomp_full_mcore 00:07:04.270 ************************************ 00:07:04.270 20:02:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.270 20:02:01 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.270 20:02:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:04.270 20:02:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.270 20:02:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.270 ************************************ 00:07:04.270 START TEST accel_decomp_mthread 00:07:04.270 ************************************ 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:04.270 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:04.270 [2024-07-15 20:02:01.402140] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:07:04.270 [2024-07-15 20:02:01.402235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791301 ] 00:07:04.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.270 [2024-07-15 20:02:01.468020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.271 [2024-07-15 20:02:01.532076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.271 20:02:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.655 00:07:05.655 real 0m1.300s 00:07:05.655 user 0m1.206s 00:07:05.655 sys 0m0.106s 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.655 20:02:02 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:05.655 ************************************ 00:07:05.655 END TEST accel_decomp_mthread 00:07:05.655 ************************************ 00:07:05.655 20:02:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.655 20:02:02 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.655 20:02:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:05.655 20:02:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.655 20:02:02 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.655 ************************************ 00:07:05.655 START TEST accel_decomp_full_mthread 00:07:05.655 ************************************ 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:05.655 [2024-07-15 20:02:02.771443] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:07:05.655 [2024-07-15 20:02:02.771528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791650 ] 00:07:05.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.655 [2024-07-15 20:02:02.832319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.655 [2024-07-15 20:02:02.897075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.655 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.656 20:02:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.039 00:07:07.039 real 0m1.315s 00:07:07.039 user 0m1.222s 00:07:07.039 sys 0m0.106s 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.039 20:02:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:07.039 ************************************ 00:07:07.039 END TEST accel_decomp_full_mthread 
00:07:07.039 ************************************ 00:07:07.039 20:02:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.039 20:02:04 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:07.039 20:02:04 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.039 20:02:04 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.039 20:02:04 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:07.039 20:02:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.039 20:02:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.039 20:02:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.039 20:02:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.039 20:02:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.039 20:02:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.039 20:02:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.039 20:02:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:07.039 20:02:04 accel -- accel/accel.sh@41 -- # jq -r . 00:07:07.039 ************************************ 00:07:07.039 START TEST accel_dif_functional_tests 00:07:07.039 ************************************ 00:07:07.039 20:02:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.039 [2024-07-15 20:02:04.184588] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:07:07.039 [2024-07-15 20:02:04.184635] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792004 ] 00:07:07.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.039 [2024-07-15 20:02:04.243206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.039 [2024-07-15 20:02:04.310453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.039 [2024-07-15 20:02:04.310571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.039 [2024-07-15 20:02:04.310574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.039 00:07:07.039 00:07:07.039 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.039 http://cunit.sourceforge.net/ 00:07:07.039 00:07:07.039 00:07:07.039 Suite: accel_dif 00:07:07.039 Test: verify: DIF generated, GUARD check ...passed 00:07:07.039 Test: verify: DIF generated, APPTAG check ...passed 00:07:07.039 Test: verify: DIF generated, REFTAG check ...passed 00:07:07.039 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:02:04.365802] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.039 passed 00:07:07.039 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:02:04.365846] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.039 passed 00:07:07.039 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:02:04.365868] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.039 passed 00:07:07.039 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:07.039 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 20:02:04.365916] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:07.039 passed 00:07:07.039 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:07.039 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:07.039 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:07.039 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:02:04.366028] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:07.039 passed 00:07:07.039 Test: verify copy: DIF generated, GUARD check ...passed 00:07:07.039 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:07.039 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:07.039 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:02:04.366153] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.039 passed 00:07:07.039 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:02:04.366177] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.039 passed 00:07:07.039 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:02:04.366197] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.039 passed 00:07:07.039 Test: generate copy: DIF generated, GUARD check ...passed 00:07:07.039 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:07.039 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:07.039 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:07.039 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:07.040 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:07.040 Test: generate copy: iovecs-len validate ...[2024-07-15 20:02:04.366381] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:07.040 passed 00:07:07.040 Test: generate copy: buffer alignment validate ...passed 00:07:07.040 00:07:07.040 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.040 suites 1 1 n/a 0 0 00:07:07.040 tests 26 26 26 0 0 00:07:07.040 asserts 115 115 115 0 n/a 00:07:07.040 00:07:07.040 Elapsed time = 0.002 seconds 00:07:07.299 00:07:07.299 real 0m0.346s 00:07:07.299 user 0m0.489s 00:07:07.299 sys 0m0.118s 00:07:07.299 20:02:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.299 20:02:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:07.299 ************************************ 00:07:07.299 END TEST accel_dif_functional_tests 00:07:07.299 ************************************ 00:07:07.299 20:02:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.299 00:07:07.299 real 0m30.093s 00:07:07.299 user 0m33.699s 00:07:07.299 sys 0m4.149s 00:07:07.299 20:02:04 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.299 20:02:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.299 ************************************ 00:07:07.299 END TEST accel 00:07:07.299 ************************************ 00:07:07.299 20:02:04 -- common/autotest_common.sh@1142 -- # return 0 00:07:07.299 20:02:04 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:07.299 20:02:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.299 20:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.299 20:02:04 -- common/autotest_common.sh@10 -- # set +x 00:07:07.299 ************************************ 00:07:07.299 START TEST accel_rpc 00:07:07.299 ************************************ 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:07.299 * Looking for test storage... 00:07:07.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:07.299 20:02:04 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.299 20:02:04 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=792073 00:07:07.299 20:02:04 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 792073 00:07:07.299 20:02:04 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 792073 ']' 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.299 20:02:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 [2024-07-15 20:02:04.748330] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:07:07.560 [2024-07-15 20:02:04.748396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792073 ] 00:07:07.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.560 [2024-07-15 20:02:04.811191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.560 [2024-07-15 20:02:04.885606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.134 20:02:05 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.134 20:02:05 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:08.134 20:02:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:08.134 20:02:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:08.134 20:02:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:08.134 20:02:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:08.134 20:02:05 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:08.134 20:02:05 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.134 20:02:05 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.134 20:02:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.134 ************************************ 00:07:08.134 START TEST accel_assign_opcode 00:07:08.134 ************************************ 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.134 [2024-07-15 20:02:05.543529] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.134 [2024-07-15 20:02:05.555555] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.134 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.393 software 00:07:08.393 00:07:08.393 real 0m0.211s 00:07:08.393 user 0m0.049s 00:07:08.393 sys 0m0.011s 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.393 20:02:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.393 ************************************ 00:07:08.393 END TEST accel_assign_opcode 00:07:08.393 ************************************ 00:07:08.393 20:02:05 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:08.393 20:02:05 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 792073 00:07:08.393 20:02:05 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 792073 ']' 00:07:08.393 20:02:05 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 792073 00:07:08.393 20:02:05 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:08.393 20:02:05 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.393 20:02:05 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792073 00:07:08.652 20:02:05 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.652 20:02:05 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.652 20:02:05 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792073' 00:07:08.652 killing process with pid 792073 00:07:08.652 20:02:05 accel_rpc -- common/autotest_common.sh@967 -- # kill 792073 00:07:08.652 20:02:05 accel_rpc -- common/autotest_common.sh@972 -- # wait 792073 00:07:08.652 00:07:08.652 real 0m1.451s 00:07:08.652 user 0m1.527s 00:07:08.652 sys 0m0.403s 00:07:08.652 20:02:06 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.652 20:02:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.652 ************************************ 00:07:08.652 END TEST accel_rpc 00:07:08.652 ************************************ 00:07:08.652 20:02:06 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.652 20:02:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.652 20:02:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.652 20:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.912 20:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:08.912 ************************************ 00:07:08.912 START TEST app_cmdline 00:07:08.912 ************************************ 00:07:08.912 20:02:06 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.912 * Looking for test storage... 
00:07:08.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.912 20:02:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.912 20:02:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=792483 00:07:08.912 20:02:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 792483 00:07:08.912 20:02:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.912 20:02:06 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 792483 ']' 00:07:08.912 20:02:06 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.912 20:02:06 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.912 20:02:06 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.912 20:02:06 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.913 20:02:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.913 [2024-07-15 20:02:06.281727] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:07:08.913 [2024-07-15 20:02:06.281781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792483 ] 00:07:08.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.913 [2024-07-15 20:02:06.341160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.174 [2024-07-15 20:02:06.405027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.747 20:02:07 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.747 20:02:07 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:09.747 20:02:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:10.009 { 00:07:10.009 "version": "SPDK v24.09-pre git sha1 35c1e586c", 00:07:10.009 "fields": { 00:07:10.009 "major": 24, 00:07:10.009 "minor": 9, 00:07:10.009 "patch": 0, 00:07:10.009 "suffix": "-pre", 00:07:10.009 "commit": "35c1e586c" 00:07:10.009 } 00:07:10.009 } 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:10.009 20:02:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.009 request: 00:07:10.009 { 00:07:10.009 "method": "env_dpdk_get_mem_stats", 00:07:10.009 "req_id": 1 00:07:10.009 } 00:07:10.009 Got JSON-RPC error response 00:07:10.009 response: 00:07:10.009 { 00:07:10.009 "code": -32601, 00:07:10.009 "message": "Method not found" 00:07:10.009 } 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.009 20:02:07 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.269 20:02:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 792483 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 792483 ']' 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 792483 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792483 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792483' 00:07:10.269 killing process with pid 792483 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@967 -- # kill 792483 00:07:10.269 20:02:07 app_cmdline -- common/autotest_common.sh@972 -- # wait 792483 00:07:10.269 00:07:10.269 real 0m1.579s 00:07:10.269 user 0m1.915s 00:07:10.269 sys 0m0.401s 00:07:10.530 20:02:07 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.530 
20:02:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.530 ************************************ 00:07:10.530 END TEST app_cmdline 00:07:10.530 ************************************ 00:07:10.530 20:02:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:10.530 20:02:07 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.530 20:02:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.530 20:02:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.530 20:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.530 ************************************ 00:07:10.530 START TEST version 00:07:10.530 ************************************ 00:07:10.530 20:02:07 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.530 * Looking for test storage... 00:07:10.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.530 20:02:07 version -- app/version.sh@17 -- # get_header_version major 00:07:10.530 20:02:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # cut -f2 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.530 20:02:07 version -- app/version.sh@17 -- # major=24 00:07:10.530 20:02:07 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.530 20:02:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # cut -f2 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.530 20:02:07 version -- app/version.sh@18 -- # minor=9 00:07:10.530 20:02:07 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # cut -f2 00:07:10.530 20:02:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.530 20:02:07 version -- app/version.sh@19 -- # patch=0 00:07:10.530 20:02:07 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.530 20:02:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # cut -f2 00:07:10.530 20:02:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.530 20:02:07 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.530 20:02:07 version -- app/version.sh@22 -- # version=24.9 00:07:10.530 20:02:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.530 20:02:07 version -- app/version.sh@28 -- # version=24.9rc0 00:07:10.530 20:02:07 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.530 20:02:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
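The version suite traced above pulls the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX macros out of include/spdk/version.h with grep, cut and tr, folds the -pre suffix into a 24.9rc0 string, and checks it against what the Python bindings report via python3 -c 'import spdk; print(spdk.__version__)'. A minimal stand-alone sketch of the header-parsing half, assuming a local checkout pointed to by the hypothetical SPDK_ROOT variable instead of the Jenkins workspace path:

  SPDK_ROOT=${SPDK_ROOT:-./spdk}
  ver_field() {
      # mirrors the traced get_header_version calls: cut -f2 relies on the tab
      # between the macro name and its value in version.h
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          "$SPDK_ROOT/include/spdk/version.h" | cut -f2 | tr -d '"'
  }
  printf '%s.%s.%s%s\n' "$(ver_field MAJOR)" "$(ver_field MINOR)" \
      "$(ver_field PATCH)" "$(ver_field SUFFIX)"   # prints e.g. 24.9.0-pre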
00:07:10.530 20:02:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:10.530 20:02:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:10.530 00:07:10.530 real 0m0.171s 00:07:10.530 user 0m0.090s 00:07:10.530 sys 0m0.115s 00:07:10.530 20:02:07 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.530 20:02:07 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.530 ************************************ 00:07:10.530 END TEST version 00:07:10.530 ************************************ 00:07:10.798 20:02:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:10.798 20:02:07 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:10.798 20:02:07 -- spdk/autotest.sh@198 -- # uname -s 00:07:10.798 20:02:07 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:10.798 20:02:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.798 20:02:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.798 20:02:07 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:10.798 20:02:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:10.798 20:02:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:10.798 20:02:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.798 20:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.798 20:02:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:10.798 20:02:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:10.798 20:02:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:10.798 20:02:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:10.798 20:02:08 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:10.798 20:02:08 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:10.799 20:02:08 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.799 20:02:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.799 20:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.799 20:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:10.799 ************************************ 00:07:10.799 START TEST nvmf_tcp 00:07:10.799 ************************************ 00:07:10.799 20:02:08 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.799 * Looking for test storage... 00:07:10.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.799 20:02:08 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.800 20:02:08 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.800 20:02:08 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.800 20:02:08 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.800 20:02:08 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.800 20:02:08 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.800 20:02:08 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.800 20:02:08 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:10.800 20:02:08 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.800 20:02:08 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:10.801 20:02:08 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.801 20:02:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:10.801 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.801 20:02:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.801 20:02:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.801 20:02:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.064 ************************************ 00:07:11.064 START TEST nvmf_example 00:07:11.064 ************************************ 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:11.064 * Looking for test storage... 
00:07:11.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.064 20:02:08 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.065 20:02:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:19.210 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:19.210 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:19.210 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:19.210 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:07:19.210 00:07:19.210 --- 10.0.0.2 ping statistics --- 00:07:19.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.210 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:07:19.210 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:07:19.210 00:07:19.210 --- 10.0.0.1 ping statistics --- 00:07:19.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.210 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=796659 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 796659 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 796659 ']' 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
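At this point nvmftestinit has finished the phy-mode network plumbing: one port of the E810 pair (cvl_0_0) is moved into a fresh cvl_0_0_ns_spdk namespace as the target side with 10.0.0.2, the host keeps cvl_0_1 as the initiator side with 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-checked before the example target is launched under ip netns exec with -i 0 -g 10000 -m 0xF. Condensed from the trace above (the cvl_0_0/cvl_0_1 names are specific to this host's NICs; substitute your own devices):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host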
00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.211 20:02:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:19.211 20:02:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:19.211 EAL: No free 2048 kB hugepages reported on node 1 
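With the example target listening, the suite provisions it over JSON-RPC and then points spdk_nvme_perf at the TCP listener; the perf client's attach and latency report follow below. A condensed, readable form of the rpc_cmd calls traced above (scripts/rpc.py is assumed to be run from the SPDK repo root against the default /var/tmp/spdk.sock), ending with the perf invocation:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512   # 64 MB malloc bdev, 512-byte blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'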
00:07:29.239 Initializing NVMe Controllers 00:07:29.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:29.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:29.239 Initialization complete. Launching workers. 00:07:29.239 ======================================================== 00:07:29.239 Latency(us) 00:07:29.239 Device Information : IOPS MiB/s Average min max 00:07:29.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16413.21 64.11 3898.85 664.64 15278.53 00:07:29.239 ======================================================== 00:07:29.239 Total : 16413.21 64.11 3898.85 664.64 15278.53 00:07:29.239 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.239 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.239 rmmod nvme_tcp 00:07:29.239 rmmod nvme_fabrics 00:07:29.239 rmmod nvme_keyring 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 796659 ']' 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 796659 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 796659 ']' 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 796659 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796659 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796659' 00:07:29.499 killing process with pid 796659 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 796659 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 796659 00:07:29.499 nvmf threads initialize successfully 00:07:29.499 bdev subsystem init successfully 00:07:29.499 created a nvmf target service 00:07:29.499 create targets's poll groups done 00:07:29.499 all subsystems of target started 00:07:29.499 nvmf target is running 00:07:29.499 all subsystems of target stopped 00:07:29.499 destroy targets's poll groups done 00:07:29.499 destroyed the nvmf target service 00:07:29.499 bdev subsystem finish successfully 00:07:29.499 nvmf threads destroy successfully 00:07:29.499 20:02:26 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.499 20:02:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.045 20:02:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.045 20:02:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:32.045 20:02:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.045 20:02:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.045 00:07:32.045 real 0m20.742s 00:07:32.045 user 0m46.254s 00:07:32.045 sys 0m6.329s 00:07:32.045 20:02:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.045 20:02:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.045 ************************************ 00:07:32.045 END TEST nvmf_example 00:07:32.045 ************************************ 00:07:32.045 20:02:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:32.045 20:02:29 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:32.045 20:02:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.045 20:02:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.045 20:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.045 ************************************ 00:07:32.045 START TEST nvmf_filesystem 00:07:32.045 ************************************ 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:32.045 * Looking for test storage... 
00:07:32.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.045 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:32.046 20:02:29 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:32.046 #define SPDK_CONFIG_H 00:07:32.046 #define SPDK_CONFIG_APPS 1 00:07:32.046 #define SPDK_CONFIG_ARCH native 00:07:32.046 #undef SPDK_CONFIG_ASAN 00:07:32.046 #undef SPDK_CONFIG_AVAHI 00:07:32.046 #undef SPDK_CONFIG_CET 00:07:32.046 #define SPDK_CONFIG_COVERAGE 1 00:07:32.046 #define SPDK_CONFIG_CROSS_PREFIX 00:07:32.046 #undef SPDK_CONFIG_CRYPTO 00:07:32.046 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:32.046 #undef SPDK_CONFIG_CUSTOMOCF 00:07:32.046 #undef SPDK_CONFIG_DAOS 00:07:32.046 #define SPDK_CONFIG_DAOS_DIR 00:07:32.046 #define SPDK_CONFIG_DEBUG 1 00:07:32.046 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:32.046 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:32.046 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:32.046 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:32.046 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:32.046 #undef SPDK_CONFIG_DPDK_UADK 00:07:32.046 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:32.046 #define SPDK_CONFIG_EXAMPLES 1 00:07:32.046 #undef SPDK_CONFIG_FC 00:07:32.046 #define SPDK_CONFIG_FC_PATH 00:07:32.046 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:32.046 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:32.046 #undef SPDK_CONFIG_FUSE 00:07:32.046 #undef SPDK_CONFIG_FUZZER 00:07:32.046 #define SPDK_CONFIG_FUZZER_LIB 00:07:32.046 #undef SPDK_CONFIG_GOLANG 00:07:32.046 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:32.046 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:32.046 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:32.046 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:32.046 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:32.046 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:32.046 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:32.046 #define SPDK_CONFIG_IDXD 1 00:07:32.046 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:32.046 #undef SPDK_CONFIG_IPSEC_MB 00:07:32.046 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:32.046 #define SPDK_CONFIG_ISAL 1 00:07:32.046 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:32.046 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:32.046 #define SPDK_CONFIG_LIBDIR 00:07:32.046 #undef SPDK_CONFIG_LTO 00:07:32.046 #define SPDK_CONFIG_MAX_LCORES 128 00:07:32.046 #define SPDK_CONFIG_NVME_CUSE 1 00:07:32.046 #undef SPDK_CONFIG_OCF 00:07:32.046 #define SPDK_CONFIG_OCF_PATH 00:07:32.046 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:32.046 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:32.046 #define SPDK_CONFIG_PGO_DIR 00:07:32.046 #undef SPDK_CONFIG_PGO_USE 00:07:32.046 #define SPDK_CONFIG_PREFIX /usr/local 00:07:32.046 #undef SPDK_CONFIG_RAID5F 00:07:32.046 #undef SPDK_CONFIG_RBD 00:07:32.046 #define SPDK_CONFIG_RDMA 1 00:07:32.046 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:32.046 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:32.046 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:32.046 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:32.046 #define SPDK_CONFIG_SHARED 1 00:07:32.046 #undef SPDK_CONFIG_SMA 00:07:32.046 #define SPDK_CONFIG_TESTS 1 00:07:32.046 #undef SPDK_CONFIG_TSAN 00:07:32.046 #define SPDK_CONFIG_UBLK 1 00:07:32.046 #define SPDK_CONFIG_UBSAN 1 00:07:32.046 #undef SPDK_CONFIG_UNIT_TESTS 00:07:32.046 #undef SPDK_CONFIG_URING 00:07:32.046 #define SPDK_CONFIG_URING_PATH 00:07:32.046 #undef SPDK_CONFIG_URING_ZNS 00:07:32.046 #undef SPDK_CONFIG_USDT 00:07:32.046 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:32.046 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:32.046 #define SPDK_CONFIG_VFIO_USER 1 00:07:32.046 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:32.046 #define SPDK_CONFIG_VHOST 1 00:07:32.046 #define SPDK_CONFIG_VIRTIO 1 00:07:32.046 #undef SPDK_CONFIG_VTUNE 00:07:32.046 #define SPDK_CONFIG_VTUNE_DIR 00:07:32.046 #define SPDK_CONFIG_WERROR 1 00:07:32.046 #define SPDK_CONFIG_WPDK_DIR 00:07:32.046 #undef SPDK_CONFIG_XNVME 00:07:32.046 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:32.046 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:32.047 20:02:29 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
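The trace above (common/autotest_common.sh@193-238) records the sanitizer environment being prepared before the filesystem tests start: ASAN and UBSAN runtime options are exported, a LeakSanitizer suppression file is rebuilt under /var/tmp, and libfuse3 leak reports are suppressed. The following is a minimal bash sketch of that pattern, reconstructed only from the values visible in the trace (whether the real script overwrites or appends to the suppression file is not visible in the log and is an assumption here):

    # Rebuild the LSAN suppression file and export the sanitizer runtime
    # options, mirroring the values recorded in the trace above.
    suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$suppression_file"
    echo 'leak:libfuse3.so' > "$suppression_file"   # ignore known libfuse3 leak reports
    export LSAN_OPTIONS="suppressions=$suppression_file"
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'

With this in place, any SPDK application launched later in the run (for example the nvmf_tgt started further below) inherits these settings from the environment.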
00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:32.047 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 799527 ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 799527 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.WPMzQG 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.WPMzQG/tests/target /tmp/spdk.WPMzQG 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118649290752 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10721722368 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:32.048 20:02:29 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684081152 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1425408 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:32.048 * Looking for test storage... 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118649290752 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12936314880 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.048 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.049 20:02:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:40.185 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:40.185 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:40.185 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:40.185 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.185 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:40.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:07:40.186 00:07:40.186 --- 10.0.0.2 ping statistics --- 00:07:40.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.186 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:07:40.186 00:07:40.186 --- 10.0.0.1 ping statistics --- 00:07:40.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.186 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 ************************************ 00:07:40.186 START TEST nvmf_filesystem_no_in_capsule 00:07:40.186 ************************************ 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=803291 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 803291 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 803291 ']' 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.186 20:02:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 [2024-07-15 20:02:36.559120] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:07:40.186 [2024-07-15 20:02:36.559215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.186 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.186 [2024-07-15 20:02:36.631194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.186 [2024-07-15 20:02:36.708294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.186 [2024-07-15 20:02:36.708332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.186 [2024-07-15 20:02:36.708340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.186 [2024-07-15 20:02:36.708347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.186 [2024-07-15 20:02:36.708352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.186 [2024-07-15 20:02:36.708493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.186 [2024-07-15 20:02:36.708612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.186 [2024-07-15 20:02:36.708768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.186 [2024-07-15 20:02:36.708769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 [2024-07-15 20:02:37.385753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.186 
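The trace up to this point is environment plumbing rather than the filesystem test itself: nvmf/common.sh finds the two ice-driven E810 ports at 0000:4b:00.0/.1, maps each PCI address to its net device through /sys/bus/pci/devices/$pci/net (cvl_0_0 and cvl_0_1), moves cvl_0_0 into the cvl_0_0_ns_spdk namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, verifies the path with ping in both directions, and only then starts nvmf_tgt inside the namespace and creates the TCP transport. A minimal standalone sketch of that bring-up follows; it reuses the interface names, addresses and flags from this run, and assumes an SPDK checkout providing build/bin/nvmf_tgt and scripts/rpc.py on the default /var/tmp/spdk.sock (the harness itself goes through its rpc_cmd/nvmfappstart wrappers instead).

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns

  # start the target inside the namespace, then create the transport once its RPC socket is up
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # flags as traced above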
20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 [2024-07-15 20:02:37.514557] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.186 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:40.186 { 00:07:40.186 "name": "Malloc1", 00:07:40.186 "aliases": [ 00:07:40.186 "2d9ea385-92e3-46f5-b874-1b6fe9d1f3d6" 00:07:40.186 ], 00:07:40.186 "product_name": "Malloc disk", 00:07:40.186 "block_size": 512, 00:07:40.186 "num_blocks": 1048576, 00:07:40.186 "uuid": "2d9ea385-92e3-46f5-b874-1b6fe9d1f3d6", 00:07:40.186 "assigned_rate_limits": { 00:07:40.186 "rw_ios_per_sec": 0, 00:07:40.186 "rw_mbytes_per_sec": 0, 00:07:40.186 "r_mbytes_per_sec": 0, 00:07:40.186 "w_mbytes_per_sec": 0 00:07:40.186 }, 00:07:40.186 "claimed": true, 00:07:40.186 "claim_type": "exclusive_write", 00:07:40.186 "zoned": false, 00:07:40.186 "supported_io_types": { 00:07:40.186 "read": true, 00:07:40.186 "write": true, 00:07:40.186 "unmap": true, 00:07:40.186 "flush": true, 00:07:40.186 "reset": true, 00:07:40.186 "nvme_admin": false, 00:07:40.186 "nvme_io": false, 00:07:40.186 "nvme_io_md": false, 00:07:40.186 "write_zeroes": true, 00:07:40.186 "zcopy": true, 00:07:40.186 "get_zone_info": false, 00:07:40.186 "zone_management": false, 00:07:40.186 "zone_append": false, 00:07:40.186 "compare": false, 00:07:40.186 "compare_and_write": false, 00:07:40.186 "abort": true, 00:07:40.187 "seek_hole": false, 00:07:40.187 "seek_data": false, 00:07:40.187 "copy": true, 00:07:40.187 "nvme_iov_md": false 00:07:40.187 }, 00:07:40.187 "memory_domains": [ 00:07:40.187 { 00:07:40.187 "dma_device_id": "system", 00:07:40.187 "dma_device_type": 1 00:07:40.187 }, 00:07:40.187 { 00:07:40.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.187 "dma_device_type": 2 00:07:40.187 } 00:07:40.187 ], 00:07:40.187 "driver_specific": {} 00:07:40.187 } 00:07:40.187 ]' 00:07:40.187 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:40.187 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:40.187 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:40.447 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:40.447 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:40.447 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:40.447 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.447 20:02:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.833 20:02:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.833 20:02:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:41.834 20:02:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:41.834 20:02:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:41.834 20:02:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:44.377 20:02:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:44.950 20:02:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.893 ************************************ 
00:07:45.893 START TEST filesystem_ext4 00:07:45.893 ************************************ 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:45.893 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:45.894 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:45.894 mke2fs 1.46.5 (30-Dec-2021) 00:07:45.894 Discarding device blocks: 0/522240 done 00:07:45.894 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:45.894 Filesystem UUID: c9d4b42c-08d9-441e-a35f-7164eaafec95 00:07:45.894 Superblock backups stored on blocks: 00:07:45.894 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:45.894 00:07:45.894 Allocating group tables: 0/64 done 00:07:46.154 Writing inode tables: 0/64 done 00:07:46.154 Creating journal (8192 blocks): done 00:07:46.154 Writing superblocks and filesystem accounting information: 0/64 done 00:07:46.154 00:07:46.154 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:46.154 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.414 20:02:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 803291 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.414 00:07:46.414 real 0m0.514s 00:07:46.414 user 0m0.026s 00:07:46.414 sys 0m0.068s 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:46.414 ************************************ 00:07:46.414 END TEST filesystem_ext4 00:07:46.414 ************************************ 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.414 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.675 ************************************ 00:07:46.675 START TEST filesystem_btrfs 00:07:46.675 ************************************ 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:46.675 20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:46.675 
20:02:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:46.936 btrfs-progs v6.6.2 00:07:46.936 See https://btrfs.readthedocs.io for more information. 00:07:46.936 00:07:46.936 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:46.936 NOTE: several default settings have changed in version 5.15, please make sure 00:07:46.936 this does not affect your deployments: 00:07:46.936 - DUP for metadata (-m dup) 00:07:46.936 - enabled no-holes (-O no-holes) 00:07:46.936 - enabled free-space-tree (-R free-space-tree) 00:07:46.936 00:07:46.936 Label: (null) 00:07:46.936 UUID: e559b012-9508-4c24-ab5c-2d98a571b5f2 00:07:46.936 Node size: 16384 00:07:46.936 Sector size: 4096 00:07:46.936 Filesystem size: 510.00MiB 00:07:46.936 Block group profiles: 00:07:46.936 Data: single 8.00MiB 00:07:46.936 Metadata: DUP 32.00MiB 00:07:46.936 System: DUP 8.00MiB 00:07:46.936 SSD detected: yes 00:07:46.936 Zoned device: no 00:07:46.936 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:46.936 Runtime features: free-space-tree 00:07:46.936 Checksum: crc32c 00:07:46.936 Number of devices: 1 00:07:46.936 Devices: 00:07:46.936 ID SIZE PATH 00:07:46.936 1 510.00MiB /dev/nvme0n1p1 00:07:46.936 00:07:46.936 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:46.936 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 803291 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.202 00:07:47.202 real 0m0.599s 00:07:47.202 user 0m0.029s 00:07:47.202 sys 0m0.130s 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.202 
************************************ 00:07:47.202 END TEST filesystem_btrfs 00:07:47.202 ************************************ 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.202 ************************************ 00:07:47.202 START TEST filesystem_xfs 00:07:47.202 ************************************ 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:47.202 20:02:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:47.202 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:47.202 = sectsz=512 attr=2, projid32bit=1 00:07:47.202 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:47.202 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:47.202 data = bsize=4096 blocks=130560, imaxpct=25 00:07:47.202 = sunit=0 swidth=0 blks 00:07:47.202 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:47.202 log =internal log bsize=4096 blocks=16384, version=2 00:07:47.202 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:47.202 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.588 Discarding blocks...Done. 
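filesystem_ext4 and filesystem_btrfs above, and filesystem_xfs continuing below, all run the same cycle from target/filesystem.sh: make_filesystem formats the partition carved out of the connected namespace, the test mounts it, creates and deletes a file with syncs in between, unmounts, and finally uses kill -0 to confirm the nvmf_tgt process survived the I/O. Condensed into plain commands, using the device name, serial and PID from this run:

  # the namespace was attached earlier (target/filesystem.sh@60) and located by its serial:
  #   nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkfs.xfs -f /dev/nvme0n1p1        # ext4 round uses mkfs.ext4 -F, btrfs round uses mkfs.btrfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa             # exercise the filesystem over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 803291                    # fails if the target died while serving the I/O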
00:07:48.588 20:02:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.588 20:02:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 803291 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.503 00:07:50.503 real 0m2.998s 00:07:50.503 user 0m0.024s 00:07:50.503 sys 0m0.079s 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.503 ************************************ 00:07:50.503 END TEST filesystem_xfs 00:07:50.503 ************************************ 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:50.503 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:50.765 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:50.765 20:02:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.765 20:02:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 803291 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 803291 ']' 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 803291 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 803291 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 803291' 00:07:50.765 killing process with pid 803291 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 803291 00:07:50.765 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 803291 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.026 00:07:51.026 real 0m11.856s 00:07:51.026 user 0m46.663s 00:07:51.026 sys 0m1.194s 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.026 ************************************ 00:07:51.026 END TEST nvmf_filesystem_no_in_capsule 00:07:51.026 ************************************ 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.026 ************************************ 00:07:51.026 START TEST nvmf_filesystem_in_capsule 00:07:51.026 ************************************ 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=805881 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 805881 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 805881 ']' 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.026 20:02:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.288 [2024-07-15 20:02:48.488968] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:07:51.288 [2024-07-15 20:02:48.489021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.288 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.288 [2024-07-15 20:02:48.557905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.288 [2024-07-15 20:02:48.631727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.288 [2024-07-15 20:02:48.631763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
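The nvmf_tgt instance starting here (pid 805881) backs the second half of the suite: nvmf_filesystem_part is now called with 4096, so the transport created once the reactors are up allows 4096 bytes of in-capsule data, letting small host writes ride inside the command capsule instead of being transferred separately. Everything else (the Malloc1 bdev, the cnode1 subsystem and listener, the connect-by-serial lookup, the ext4/btrfs/xfs checks) repeats as in the first half. The only RPC-level difference, with flags copied from the trace (-c is the in-capsule data size, 0 disables it):

  # first half,  nvmf_filesystem_no_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  # second half, nvmf_filesystem_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096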
00:07:51.288 [2024-07-15 20:02:48.631770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.288 [2024-07-15 20:02:48.631777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.288 [2024-07-15 20:02:48.631782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.288 [2024-07-15 20:02:48.631923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.288 [2024-07-15 20:02:48.632038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.288 [2024-07-15 20:02:48.632187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.288 [2024-07-15 20:02:48.632188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.897 [2024-07-15 20:02:49.314770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.897 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:51.898 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.898 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.159 Malloc1 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.159 20:02:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.159 [2024-07-15 20:02:49.441459] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:52.159 { 00:07:52.159 "name": "Malloc1", 00:07:52.159 "aliases": [ 00:07:52.159 "8c7640b5-8646-4d9a-bd6d-e604f0c6d021" 00:07:52.159 ], 00:07:52.159 "product_name": "Malloc disk", 00:07:52.159 "block_size": 512, 00:07:52.159 "num_blocks": 1048576, 00:07:52.159 "uuid": "8c7640b5-8646-4d9a-bd6d-e604f0c6d021", 00:07:52.159 "assigned_rate_limits": { 00:07:52.159 "rw_ios_per_sec": 0, 00:07:52.159 "rw_mbytes_per_sec": 0, 00:07:52.159 "r_mbytes_per_sec": 0, 00:07:52.159 "w_mbytes_per_sec": 0 00:07:52.159 }, 00:07:52.159 "claimed": true, 00:07:52.159 "claim_type": "exclusive_write", 00:07:52.159 "zoned": false, 00:07:52.159 "supported_io_types": { 00:07:52.159 "read": true, 00:07:52.159 "write": true, 00:07:52.159 "unmap": true, 00:07:52.159 "flush": true, 00:07:52.159 "reset": true, 00:07:52.159 "nvme_admin": false, 00:07:52.159 "nvme_io": false, 00:07:52.159 "nvme_io_md": false, 00:07:52.159 "write_zeroes": true, 00:07:52.159 "zcopy": true, 00:07:52.159 "get_zone_info": false, 00:07:52.159 "zone_management": false, 00:07:52.159 
"zone_append": false, 00:07:52.159 "compare": false, 00:07:52.159 "compare_and_write": false, 00:07:52.159 "abort": true, 00:07:52.159 "seek_hole": false, 00:07:52.159 "seek_data": false, 00:07:52.159 "copy": true, 00:07:52.159 "nvme_iov_md": false 00:07:52.159 }, 00:07:52.159 "memory_domains": [ 00:07:52.159 { 00:07:52.159 "dma_device_id": "system", 00:07:52.159 "dma_device_type": 1 00:07:52.159 }, 00:07:52.159 { 00:07:52.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.159 "dma_device_type": 2 00:07:52.159 } 00:07:52.159 ], 00:07:52.159 "driver_specific": {} 00:07:52.159 } 00:07:52.159 ]' 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:52.159 20:02:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.073 20:02:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.073 20:02:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:54.073 20:02:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.073 20:02:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:54.073 20:02:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:55.993 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:56.253 20:02:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 START TEST filesystem_in_capsule_ext4 00:07:57.193 ************************************ 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:57.193 20:02:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:57.193 20:02:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:57.193 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.193 Discarding device blocks: 0/522240 done 00:07:57.193 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:57.193 Filesystem UUID: 5ac8c35b-8a31-4d2c-8472-a8b72e02f429 00:07:57.193 Superblock backups stored on blocks: 00:07:57.193 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:57.193 00:07:57.193 Allocating group tables: 0/64 done 00:07:57.193 Writing inode tables: 0/64 done 00:07:57.714 Creating journal (8192 blocks): done 00:07:58.670 Writing superblocks and filesystem accounting information: 0/64 done 00:07:58.670 00:07:58.670 20:02:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:58.670 20:02:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 805881 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.614 00:07:59.614 real 0m2.410s 00:07:59.614 user 0m0.024s 00:07:59.614 sys 0m0.074s 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:59.614 ************************************ 00:07:59.614 END TEST filesystem_in_capsule_ext4 00:07:59.614 ************************************ 00:07:59.614 
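Condensed, the ext4 subtest that just finished does this: pick the force flag for the filesystem type, format the partition, mount it, push a small write through the NVMe-oF namespace, unmount, and confirm the device and its partition are still listed (in between, the trace also checks that the nvmf target process is still alive with kill -0 805881). The function below is an illustrative condensation of that cycle, not the make_filesystem/nvmf_filesystem_create helpers from the scripts; the btrfs and xfs subtests that follow run the same cycle with a different fstype.

# Illustrative condensation of the format/mount/verify cycle traced above; the
# real helpers live in common/autotest_common.sh and target/filesystem.sh.
filesystem_cycle() {
    local fstype=$1 part=$2 force

    # ext4 is forced with -F, btrfs/xfs with -f (matches the trace)
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    "mkfs.$fstype" "$force" "$part"

    mkdir -p /mnt/device
    mount "$part" /mnt/device
    touch /mnt/device/aaa        # a small write through the exported namespace
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    # both the namespace (nvme0n1 in the trace) and its partition must still be visible
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w "$(basename "$part")"
}

# e.g. filesystem_cycle ext4 /dev/nvme0n1p1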
20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.614 ************************************ 00:07:59.614 START TEST filesystem_in_capsule_btrfs 00:07:59.614 ************************************ 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:59.614 20:02:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:59.874 btrfs-progs v6.6.2 00:07:59.874 See https://btrfs.readthedocs.io for more information. 00:07:59.874 00:07:59.874 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:59.874 NOTE: several default settings have changed in version 5.15, please make sure 00:07:59.874 this does not affect your deployments: 00:07:59.874 - DUP for metadata (-m dup) 00:07:59.874 - enabled no-holes (-O no-holes) 00:07:59.874 - enabled free-space-tree (-R free-space-tree) 00:07:59.874 00:07:59.874 Label: (null) 00:07:59.874 UUID: 9d784f5d-bd3a-4d28-9395-e3050ac66bf2 00:07:59.874 Node size: 16384 00:07:59.874 Sector size: 4096 00:07:59.874 Filesystem size: 510.00MiB 00:07:59.874 Block group profiles: 00:07:59.874 Data: single 8.00MiB 00:07:59.874 Metadata: DUP 32.00MiB 00:07:59.874 System: DUP 8.00MiB 00:07:59.874 SSD detected: yes 00:07:59.874 Zoned device: no 00:07:59.874 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:59.874 Runtime features: free-space-tree 00:07:59.874 Checksum: crc32c 00:07:59.874 Number of devices: 1 00:07:59.874 Devices: 00:07:59.874 ID SIZE PATH 00:07:59.874 1 510.00MiB /dev/nvme0n1p1 00:07:59.874 00:07:59.874 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:59.874 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.135 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.135 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:00.135 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.135 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 805881 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.396 00:08:00.396 real 0m0.664s 00:08:00.396 user 0m0.028s 00:08:00.396 sys 0m0.135s 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:00.396 ************************************ 00:08:00.396 END TEST filesystem_in_capsule_btrfs 00:08:00.396 ************************************ 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.396 ************************************ 00:08:00.396 START TEST filesystem_in_capsule_xfs 00:08:00.396 ************************************ 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:00.396 20:02:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:00.396 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:00.396 = sectsz=512 attr=2, projid32bit=1 00:08:00.396 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:00.396 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:00.396 data = bsize=4096 blocks=130560, imaxpct=25 00:08:00.396 = sunit=0 swidth=0 blks 00:08:00.396 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:00.396 log =internal log bsize=4096 blocks=16384, version=2 00:08:00.396 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:00.396 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.337 Discarding blocks...Done. 
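All three subtests format the same /dev/nvme0n1p1; that partition was laid down once, right after the connect, when the trace compared the exported namespace size against the 512 MiB malloc bdev (512-byte blocks x 1048576 blocks = 536870912 bytes, per the RPC dump earlier) and then ran parted. A minimal sketch of that sizing-and-partitioning step; the values come from the log, while reading the size from sysfs is an assumption about what sec_size_to_bytes does rather than its actual implementation.

# Sketch of the size check + partitioning that precedes the mkfs runs above.
dev=nvme0n1
malloc_size=536870912                 # 512 B * 1048576 blocks, from the bdev RPC output

# the kernel reports /sys/block/<dev>/size in 512-byte sectors (assumed mechanism)
nvme_size=$(( $(cat /sys/block/$dev/size) * 512 ))
(( nvme_size == malloc_size )) || { echo "namespace size mismatch" >&2; exit 1; }

mkdir -p /mnt/device
parted -s /dev/$dev mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1                               # as in the trace, let /dev/${dev}p1 appear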
00:08:01.337 20:02:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:01.337 20:02:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 805881 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.882 00:08:03.882 real 0m3.472s 00:08:03.882 user 0m0.020s 00:08:03.882 sys 0m0.083s 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 ************************************ 00:08:03.882 END TEST filesystem_in_capsule_xfs 00:08:03.882 ************************************ 00:08:03.882 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:03.883 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:04.143 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:04.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:04.144 20:03:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 805881 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 805881 ']' 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 805881 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805881 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805881' 00:08:04.144 killing process with pid 805881 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 805881 00:08:04.144 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 805881 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:04.405 00:08:04.405 real 0m13.329s 00:08:04.405 user 0m52.527s 00:08:04.405 sys 0m1.243s 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.405 ************************************ 00:08:04.405 END TEST nvmf_filesystem_in_capsule 00:08:04.405 ************************************ 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.405 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.405 rmmod nvme_tcp 00:08:04.405 rmmod nvme_fabrics 00:08:04.405 rmmod nvme_keyring 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.667 20:03:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.575 20:03:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.575 00:08:06.575 real 0m34.861s 00:08:06.575 user 1m41.344s 00:08:06.575 sys 0m7.888s 00:08:06.575 20:03:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.575 20:03:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.576 ************************************ 00:08:06.576 END TEST nvmf_filesystem 00:08:06.576 ************************************ 00:08:06.576 20:03:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:06.576 20:03:03 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:06.576 20:03:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.576 20:03:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.576 20:03:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.576 ************************************ 00:08:06.576 START TEST nvmf_target_discovery 00:08:06.576 ************************************ 00:08:06.576 20:03:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:06.837 * Looking for test storage... 
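Before the discovery test starts, the filesystem test has torn its target down: disconnect the initiator, delete the subsystem, stop the nvmf_tgt process, unload the kernel modules, and clean up the test network namespace. A condensed sketch of that teardown; the pid 805881, NQN and interface names come from the log, rpc_cmd in the trace wraps scripts/rpc.py, and the explicit ip netns delete is an assumption about what _remove_spdk_ns does.

# Condensed sketch of the teardown traced above (illustrative).
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 805881                            # stop the nvmf_tgt reactor process

# initiator side: unload the kernel modules again
modprobe -v -r nvme-tcp                # the rmmod output above shows this also drops nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics

# target side: drop the test network namespace and flush the leftover address
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1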
00:08:06.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.837 20:03:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.838 20:03:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.838 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.838 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.838 20:03:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.838 20:03:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.984 20:03:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.984 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.984 20:03:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:08:14.984 00:08:14.984 --- 10.0.0.2 ping statistics --- 00:08:14.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.984 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:08:14.984 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:08:14.984 00:08:14.984 --- 10.0.0.1 ping statistics --- 00:08:14.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.984 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=813354 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 813354 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 813354 ']' 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:14.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.985 20:03:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 [2024-07-15 20:03:11.366455] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:08:14.985 [2024-07-15 20:03:11.366513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.985 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.985 [2024-07-15 20:03:11.438357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.985 [2024-07-15 20:03:11.511626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.985 [2024-07-15 20:03:11.511662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.985 [2024-07-15 20:03:11.511670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.985 [2024-07-15 20:03:11.511677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.985 [2024-07-15 20:03:11.511682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.985 [2024-07-15 20:03:11.511821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.985 [2024-07-15 20:03:11.511929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.985 [2024-07-15 20:03:11.512089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.985 [2024-07-15 20:03:11.512090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 [2024-07-15 20:03:12.191836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
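The discovery test's setup, which the trace below repeats for Null1 through Null4: create the TCP transport once, then give each of four subsystems a null bdev (size 102400, block size 512, the script's NULL_BDEV_SIZE/NULL_BLOCK_SIZE), a namespace and a TCP listener, and finally expose the discovery subsystem plus a referral on port 4430. A condensed sketch using the same RPCs, issued directly through scripts/rpc.py instead of the rpc_cmd wrapper used by the trace.

# Condensed sketch of the setup traced below; flags copied from the trace.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# discovery service itself, plus a referral pointing at port 4430
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430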
00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 Null1 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 [2024-07-15 20:03:12.252190] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 Null2 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:14.985 20:03:12 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 Null3 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 Null4 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.985 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.986 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.986 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.986 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.986 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.986 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.986 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.247 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.247 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:15.247 00:08:15.247 Discovery Log Number of Records 6, Generation counter 6 00:08:15.247 =====Discovery Log Entry 0====== 00:08:15.247 trtype: tcp 00:08:15.247 adrfam: ipv4 00:08:15.247 subtype: current discovery subsystem 00:08:15.247 treq: not required 00:08:15.247 portid: 0 00:08:15.247 trsvcid: 4420 00:08:15.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.247 traddr: 10.0.0.2 00:08:15.247 eflags: explicit discovery connections, duplicate discovery information 00:08:15.247 sectype: none 00:08:15.247 =====Discovery Log Entry 1====== 00:08:15.247 trtype: tcp 00:08:15.247 adrfam: ipv4 00:08:15.247 subtype: nvme subsystem 00:08:15.247 treq: not required 00:08:15.247 portid: 0 00:08:15.247 trsvcid: 4420 00:08:15.247 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:15.247 traddr: 10.0.0.2 00:08:15.247 eflags: none 00:08:15.247 sectype: none 00:08:15.247 =====Discovery Log Entry 2====== 00:08:15.247 trtype: tcp 00:08:15.247 adrfam: ipv4 00:08:15.247 subtype: nvme subsystem 00:08:15.247 treq: not required 00:08:15.247 portid: 0 00:08:15.247 trsvcid: 4420 00:08:15.247 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:15.247 traddr: 10.0.0.2 00:08:15.247 eflags: none 00:08:15.247 sectype: none 00:08:15.247 =====Discovery Log Entry 3====== 00:08:15.247 trtype: tcp 00:08:15.247 adrfam: ipv4 00:08:15.247 subtype: nvme subsystem 00:08:15.247 treq: not required 00:08:15.247 portid: 0 00:08:15.247 trsvcid: 4420 00:08:15.247 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:15.247 traddr: 10.0.0.2 00:08:15.247 eflags: none 00:08:15.247 sectype: none 00:08:15.247 =====Discovery Log Entry 4====== 00:08:15.247 trtype: tcp 00:08:15.247 adrfam: ipv4 00:08:15.247 subtype: nvme subsystem 00:08:15.247 treq: not required 
00:08:15.247 portid: 0 00:08:15.247 trsvcid: 4420 00:08:15.247 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:15.247 traddr: 10.0.0.2 00:08:15.247 eflags: none 00:08:15.247 sectype: none 00:08:15.247 =====Discovery Log Entry 5====== 00:08:15.247 trtype: tcp 00:08:15.247 adrfam: ipv4 00:08:15.247 subtype: discovery subsystem referral 00:08:15.247 treq: not required 00:08:15.247 portid: 0 00:08:15.247 trsvcid: 4430 00:08:15.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.247 traddr: 10.0.0.2 00:08:15.247 eflags: none 00:08:15.247 sectype: none 00:08:15.247 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:15.247 Perform nvmf subsystem discovery via RPC 00:08:15.247 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:15.247 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.247 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.247 [ 00:08:15.247 { 00:08:15.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:15.247 "subtype": "Discovery", 00:08:15.247 "listen_addresses": [ 00:08:15.247 { 00:08:15.247 "trtype": "TCP", 00:08:15.247 "adrfam": "IPv4", 00:08:15.247 "traddr": "10.0.0.2", 00:08:15.247 "trsvcid": "4420" 00:08:15.247 } 00:08:15.247 ], 00:08:15.247 "allow_any_host": true, 00:08:15.247 "hosts": [] 00:08:15.247 }, 00:08:15.247 { 00:08:15.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.247 "subtype": "NVMe", 00:08:15.247 "listen_addresses": [ 00:08:15.247 { 00:08:15.247 "trtype": "TCP", 00:08:15.247 "adrfam": "IPv4", 00:08:15.247 "traddr": "10.0.0.2", 00:08:15.247 "trsvcid": "4420" 00:08:15.247 } 00:08:15.247 ], 00:08:15.247 "allow_any_host": true, 00:08:15.247 "hosts": [], 00:08:15.247 "serial_number": "SPDK00000000000001", 00:08:15.247 "model_number": "SPDK bdev Controller", 00:08:15.247 "max_namespaces": 32, 00:08:15.247 "min_cntlid": 1, 00:08:15.247 "max_cntlid": 65519, 00:08:15.247 "namespaces": [ 00:08:15.247 { 00:08:15.247 "nsid": 1, 00:08:15.247 "bdev_name": "Null1", 00:08:15.247 "name": "Null1", 00:08:15.247 "nguid": "D31A6F2BBFD54D129864D9230A1B3C93", 00:08:15.247 "uuid": "d31a6f2b-bfd5-4d12-9864-d9230a1b3c93" 00:08:15.247 } 00:08:15.247 ] 00:08:15.247 }, 00:08:15.247 { 00:08:15.247 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:15.247 "subtype": "NVMe", 00:08:15.247 "listen_addresses": [ 00:08:15.247 { 00:08:15.247 "trtype": "TCP", 00:08:15.247 "adrfam": "IPv4", 00:08:15.247 "traddr": "10.0.0.2", 00:08:15.247 "trsvcid": "4420" 00:08:15.247 } 00:08:15.247 ], 00:08:15.247 "allow_any_host": true, 00:08:15.247 "hosts": [], 00:08:15.247 "serial_number": "SPDK00000000000002", 00:08:15.247 "model_number": "SPDK bdev Controller", 00:08:15.247 "max_namespaces": 32, 00:08:15.247 "min_cntlid": 1, 00:08:15.247 "max_cntlid": 65519, 00:08:15.247 "namespaces": [ 00:08:15.247 { 00:08:15.247 "nsid": 1, 00:08:15.247 "bdev_name": "Null2", 00:08:15.247 "name": "Null2", 00:08:15.247 "nguid": "D248EA20E03342F4AFBD33F4C25B54A0", 00:08:15.247 "uuid": "d248ea20-e033-42f4-afbd-33f4c25b54a0" 00:08:15.247 } 00:08:15.247 ] 00:08:15.247 }, 00:08:15.247 { 00:08:15.247 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:15.247 "subtype": "NVMe", 00:08:15.247 "listen_addresses": [ 00:08:15.247 { 00:08:15.247 "trtype": "TCP", 00:08:15.247 "adrfam": "IPv4", 00:08:15.247 "traddr": "10.0.0.2", 00:08:15.247 "trsvcid": "4420" 00:08:15.247 } 00:08:15.247 ], 00:08:15.247 "allow_any_host": true, 
00:08:15.248 "hosts": [], 00:08:15.248 "serial_number": "SPDK00000000000003", 00:08:15.248 "model_number": "SPDK bdev Controller", 00:08:15.248 "max_namespaces": 32, 00:08:15.248 "min_cntlid": 1, 00:08:15.248 "max_cntlid": 65519, 00:08:15.248 "namespaces": [ 00:08:15.248 { 00:08:15.248 "nsid": 1, 00:08:15.248 "bdev_name": "Null3", 00:08:15.248 "name": "Null3", 00:08:15.248 "nguid": "FB2B1EA292A34189B62F26976E012239", 00:08:15.248 "uuid": "fb2b1ea2-92a3-4189-b62f-26976e012239" 00:08:15.248 } 00:08:15.248 ] 00:08:15.248 }, 00:08:15.248 { 00:08:15.248 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:15.248 "subtype": "NVMe", 00:08:15.248 "listen_addresses": [ 00:08:15.248 { 00:08:15.248 "trtype": "TCP", 00:08:15.248 "adrfam": "IPv4", 00:08:15.248 "traddr": "10.0.0.2", 00:08:15.248 "trsvcid": "4420" 00:08:15.248 } 00:08:15.248 ], 00:08:15.248 "allow_any_host": true, 00:08:15.248 "hosts": [], 00:08:15.248 "serial_number": "SPDK00000000000004", 00:08:15.248 "model_number": "SPDK bdev Controller", 00:08:15.248 "max_namespaces": 32, 00:08:15.248 "min_cntlid": 1, 00:08:15.248 "max_cntlid": 65519, 00:08:15.248 "namespaces": [ 00:08:15.248 { 00:08:15.248 "nsid": 1, 00:08:15.248 "bdev_name": "Null4", 00:08:15.248 "name": "Null4", 00:08:15.248 "nguid": "78880F30AC9E4B33913D1F609355B523", 00:08:15.248 "uuid": "78880f30-ac9e-4b33-913d-1f609355b523" 00:08:15.248 } 00:08:15.248 ] 00:08:15.248 } 00:08:15.248 ] 00:08:15.248 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.248 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:15.248 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.248 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.248 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.248 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.509 rmmod nvme_tcp 00:08:15.509 rmmod nvme_fabrics 00:08:15.509 rmmod nvme_keyring 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 813354 ']' 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 813354 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 813354 ']' 00:08:15.509 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 813354 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 813354 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 813354' 00:08:15.510 killing process with pid 813354 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 813354 00:08:15.510 20:03:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 813354 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.771 20:03:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.741 20:03:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.741 00:08:17.741 real 0m11.120s 00:08:17.741 user 0m8.343s 00:08:17.741 sys 0m5.697s 00:08:17.741 20:03:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.741 20:03:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.741 ************************************ 00:08:17.741 END TEST nvmf_target_discovery 00:08:17.741 ************************************ 00:08:17.741 20:03:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:17.741 20:03:15 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.741 20:03:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:17.741 20:03:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.741 20:03:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.001 ************************************ 00:08:18.001 START TEST nvmf_referrals 00:08:18.001 ************************************ 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:18.001 * Looking for test storage... 00:08:18.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.001 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
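The three referral addresses just assigned (127.0.0.2, 127.0.0.3, 127.0.0.4, all on port 4430) drive the checks further down in this test. Stripped of the harness plumbing, that exercise amounts to roughly the following; a sketch only, assuming scripts/rpc.py against the nvmf_tgt started later in this trace, with its discovery listener on 10.0.0.2:8009:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3
  # the same three addresses must be visible to a host querying the discovery service
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done
  # a referral may also name a discovery service or a specific subsystem NQN
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery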
00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.002 20:03:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.586 20:03:21 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:24.586 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:24.586 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.586 20:03:21 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:24.586 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:24.586 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.586 20:03:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.849 20:03:22 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:08:24.849 00:08:24.849 --- 10.0.0.2 ping statistics --- 00:08:24.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.849 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:08:24.849 00:08:24.849 --- 10.0.0.1 ping statistics --- 00:08:24.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.849 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=817742 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 817742 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 817742 ']' 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
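While the harness waits for /var/tmp/spdk.sock to appear, the target-side plumbing nvmftestinit just finished is worth summarizing: one port of the two-port NIC (cvl_0_0) is moved into a private network namespace where nvmf_tgt will run, the peer port (cvl_0_1) stays in the root namespace as the initiator side, and connectivity is checked in both directions before the app starts. Condensed from the commands in the trace above (interface names, addresses and the nvmf_tgt arguments are the ones this rig uses; treat it as a sketch, not the harness itself):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &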
00:08:24.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.849 20:03:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.110 [2024-07-15 20:03:22.289557] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:08:25.110 [2024-07-15 20:03:22.289621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.110 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.110 [2024-07-15 20:03:22.363677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.110 [2024-07-15 20:03:22.440453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.110 [2024-07-15 20:03:22.440488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.110 [2024-07-15 20:03:22.440497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.110 [2024-07-15 20:03:22.440504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.110 [2024-07-15 20:03:22.440509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.110 [2024-07-15 20:03:22.440686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.110 [2024-07-15 20:03:22.440789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.110 [2024-07-15 20:03:22.440958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.110 [2024-07-15 20:03:22.440959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.682 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 [2024-07-15 20:03:23.120807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 [2024-07-15 20:03:23.137009] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.945 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.209 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.470 20:03:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:26.730 20:03:24 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:26.730 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.730 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:26.730 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.730 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.730 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.730 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.731 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.991 20:03:24 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.991 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.252 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.513 
20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.513 rmmod nvme_tcp 00:08:27.513 rmmod nvme_fabrics 00:08:27.513 rmmod nvme_keyring 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 817742 ']' 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 817742 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 817742 ']' 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 817742 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 817742 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 817742' 00:08:27.513 killing process with pid 817742 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 817742 00:08:27.513 20:03:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 817742 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.774 20:03:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.686 20:03:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.686 00:08:29.686 real 0m11.873s 00:08:29.686 user 0m13.137s 00:08:29.686 sys 0m5.776s 00:08:29.686 20:03:27 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.686 20:03:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.686 ************************************ 00:08:29.686 END TEST nvmf_referrals 00:08:29.686 ************************************ 00:08:29.686 20:03:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.686 20:03:27 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.686 20:03:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.686 20:03:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.686 20:03:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.947 ************************************ 00:08:29.947 START TEST nvmf_connect_disconnect 00:08:29.947 ************************************ 00:08:29.947 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.947 * Looking for test storage... 00:08:29.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.948 20:03:27 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.948 20:03:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.091 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:38.092 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:38.092 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.092 20:03:34 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:38.092 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:38.092 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:08:38.092 00:08:38.092 --- 10.0.0.2 ping statistics --- 00:08:38.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.092 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:08:38.092 00:08:38.092 --- 10.0.0.1 ping statistics --- 00:08:38.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.092 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=822508 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 822508 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 822508 ']' 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.092 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.093 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.093 20:03:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 [2024-07-15 20:03:34.424495] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
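The nvmf_tcp_init trace above (nvmf/common.sh@229-268) wires the two ice-driven E810 ports into a point-to-point test topology: cvl_0_0 becomes the target side at 10.0.0.2/24 inside a fresh network namespace (cvl_0_0_ns_spdk), cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the path. A minimal standalone sketch of that setup, assuming the interface names and addresses reported in this log and a root shell (the real helper also flushes any stale addresses first):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator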
00:08:38.093 [2024-07-15 20:03:34.424554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.093 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.093 [2024-07-15 20:03:34.495838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.093 [2024-07-15 20:03:34.566291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.093 [2024-07-15 20:03:34.566329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.093 [2024-07-15 20:03:34.566341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.093 [2024-07-15 20:03:34.566347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.093 [2024-07-15 20:03:34.566353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.093 [2024-07-15 20:03:34.566490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.093 [2024-07-15 20:03:34.566603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.093 [2024-07-15 20:03:34.566760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.093 [2024-07-15 20:03:34.566761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 [2024-07-15 20:03:35.238769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.093 20:03:35 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.093 [2024-07-15 20:03:35.298161] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:38.093 20:03:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:42.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.445 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.445 rmmod nvme_tcp 00:08:56.445 rmmod nvme_fabrics 00:08:56.446 rmmod nvme_keyring 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 822508 ']' 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 822508 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 822508 ']' 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 822508 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822508 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822508' 00:08:56.446 killing process with pid 822508 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 822508 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 822508 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.446 20:03:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.992 20:03:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.992 00:08:58.992 real 0m28.757s 00:08:58.992 user 1m18.816s 00:08:58.992 sys 0m6.445s 00:08:58.992 20:03:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.992 20:03:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:58.992 ************************************ 00:08:58.992 END TEST nvmf_connect_disconnect 00:08:58.992 ************************************ 00:08:58.992 20:03:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.992 20:03:55 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:58.992 20:03:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.992 20:03:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.992 20:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.992 ************************************ 00:08:58.992 START TEST nvmf_multitarget 00:08:58.992 ************************************ 00:08:58.992 20:03:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:58.992 * Looking for test storage... 
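The nvmf_connect_disconnect run that finished just above reduces to a short target-side RPC sequence followed by five host-side connect/disconnect cycles (the five "disconnected 1 controller(s)" lines). A hedged sketch of the equivalent steps, assuming SPDK's scripts/rpc.py against the nvmf_tgt started earlier, the 10.0.0.2:4420 listener from this log, and nvme-cli on the initiator side; the loop body illustrates the pattern rather than the literal contents of connect_disconnect.sh, which also passes the --hostnqn/--hostid pair generated by nvme gen-hostnqn:

    # target side (the RPCs traced at target/connect_disconnect.sh@18-24)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512            # 64 MiB bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: num_iterations=5 in the trace above
    for i in 1 2 3 4 5; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
    done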
00:08:58.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
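The gather_supported_nvmf_pci_devs pass that follows (identical to the one before the connect_disconnect test) classifies NICs by PCI vendor/device ID, matches the two Intel E810 functions 0000:4b:00.0 and 0000:4b:00.1 (0x8086 - 0x159b), and resolves each function to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1. A small standalone sketch of that resolution step, assuming lspci is available; the real helper walks a prebuilt pci_bus_cache array rather than calling lspci:

    # list the net devices backing each Intel 0x159b (E810) PCI function
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
        done
    done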
00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.992 20:03:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:05.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:05.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:05.581 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:05.581 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.581 20:04:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.581 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.581 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:09:05.843 00:09:05.843 --- 10.0.0.2 ping statistics --- 00:09:05.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.843 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:09:05.843 00:09:05.843 --- 10.0.0.1 ping statistics --- 00:09:05.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.843 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=830623 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 830623 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 830623 ']' 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.843 20:04:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:05.843 [2024-07-15 20:04:03.275310] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
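nvmfappstart, traced just above (and once per test earlier), launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application's RPC socket answers, so the nvmf_* RPCs that follow cannot race the start-up. A hedged sketch of that handshake, assuming the default /var/tmp/spdk.sock socket and paths relative to the spdk checkout; the real waitforlisten in autotest_common.sh carries more retry and error handling than shown here:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # stand-in for waitforlisten: poll until the RPC socket exists and responds
    for _ in $(seq 1 100); do
        if [ -S /var/tmp/spdk.sock ] && \
           ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done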
00:09:05.843 [2024-07-15 20:04:03.275359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.103 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.103 [2024-07-15 20:04:03.341045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.103 [2024-07-15 20:04:03.406433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.103 [2024-07-15 20:04:03.406470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.103 [2024-07-15 20:04:03.406477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.103 [2024-07-15 20:04:03.406484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.103 [2024-07-15 20:04:03.406489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.103 [2024-07-15 20:04:03.406657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.103 [2024-07-15 20:04:03.406771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.103 [2024-07-15 20:04:03.406930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.103 [2024-07-15 20:04:03.406931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:06.675 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:06.935 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:06.935 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:06.935 "nvmf_tgt_1" 00:09:06.936 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:06.936 "nvmf_tgt_2" 00:09:07.195 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.195 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:07.195 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:07.195 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:07.195 true 00:09:07.195 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:07.456 true 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.456 rmmod nvme_tcp 00:09:07.456 rmmod nvme_fabrics 00:09:07.456 rmmod nvme_keyring 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 830623 ']' 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 830623 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 830623 ']' 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 830623 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.456 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 830623 00:09:07.717 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:07.717 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:07.717 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 830623' 00:09:07.717 killing process with pid 830623 00:09:07.717 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 830623 00:09:07.717 20:04:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 830623 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.717 20:04:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.260 20:04:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.260 00:09:10.260 real 0m11.110s 00:09:10.260 user 0m9.200s 00:09:10.260 sys 0m5.677s 00:09:10.261 20:04:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.261 20:04:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 ************************************ 00:09:10.261 END TEST nvmf_multitarget 00:09:10.261 ************************************ 00:09:10.261 20:04:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:10.261 20:04:07 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:10.261 20:04:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:10.261 20:04:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.261 20:04:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 ************************************ 00:09:10.261 START TEST nvmf_rpc 00:09:10.261 ************************************ 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:10.261 * Looking for test storage... 
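The multitarget run that just finished boils down to a create/count/delete cycle driven through multitarget_rpc.py. A condensed sketch of that flow, using the script path and target names from the trace (the count checks mirror the '[' N '!=' N ']' tests above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only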
00:09:10.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.261 20:04:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
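The common.sh lines above set up the host identity that every later nvme connect call in this test reuses. A small sketch of that setup; the NQN/UUID pair is whatever nvme gen-hostnqn returns on the node, and the suffix-stripping shown here is an illustrative guess at how the hostid is derived, not the exact common.sh code:

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID reused as --hostid (assumed derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVMF_PORT=4420                              # TCP listener port used by the rpc.sh test below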
00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.844 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:16.845 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:16.845 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:16.845 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:16.845 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.845 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:09:17.106 00:09:17.106 --- 10.0.0.2 ping statistics --- 00:09:17.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.106 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:09:17.106 00:09:17.106 --- 10.0.0.1 ping statistics --- 00:09:17.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.106 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=835071 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 835071 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 835071 ']' 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.106 20:04:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.106 [2024-07-15 20:04:14.422910] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:09:17.107 [2024-07-15 20:04:14.422981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.107 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.107 [2024-07-15 20:04:14.497463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.367 [2024-07-15 20:04:14.573681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.367 [2024-07-15 20:04:14.573719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:17.367 [2024-07-15 20:04:14.573727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.367 [2024-07-15 20:04:14.573733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.367 [2024-07-15 20:04:14.573739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.367 [2024-07-15 20:04:14.573884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.367 [2024-07-15 20:04:14.573999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.367 [2024-07-15 20:04:14.574170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.367 [2024-07-15 20:04:14.574171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.937 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:17.937 "tick_rate": 2400000000, 00:09:17.937 "poll_groups": [ 00:09:17.937 { 00:09:17.937 "name": "nvmf_tgt_poll_group_000", 00:09:17.937 "admin_qpairs": 0, 00:09:17.937 "io_qpairs": 0, 00:09:17.937 "current_admin_qpairs": 0, 00:09:17.937 "current_io_qpairs": 0, 00:09:17.937 "pending_bdev_io": 0, 00:09:17.937 "completed_nvme_io": 0, 00:09:17.937 "transports": [] 00:09:17.937 }, 00:09:17.937 { 00:09:17.937 "name": "nvmf_tgt_poll_group_001", 00:09:17.937 "admin_qpairs": 0, 00:09:17.938 "io_qpairs": 0, 00:09:17.938 "current_admin_qpairs": 0, 00:09:17.938 "current_io_qpairs": 0, 00:09:17.938 "pending_bdev_io": 0, 00:09:17.938 "completed_nvme_io": 0, 00:09:17.938 "transports": [] 00:09:17.938 }, 00:09:17.938 { 00:09:17.938 "name": "nvmf_tgt_poll_group_002", 00:09:17.938 "admin_qpairs": 0, 00:09:17.938 "io_qpairs": 0, 00:09:17.938 "current_admin_qpairs": 0, 00:09:17.938 "current_io_qpairs": 0, 00:09:17.938 "pending_bdev_io": 0, 00:09:17.938 "completed_nvme_io": 0, 00:09:17.938 "transports": [] 00:09:17.938 }, 00:09:17.938 { 00:09:17.938 "name": "nvmf_tgt_poll_group_003", 00:09:17.938 "admin_qpairs": 0, 00:09:17.938 "io_qpairs": 0, 00:09:17.938 "current_admin_qpairs": 0, 00:09:17.938 "current_io_qpairs": 0, 00:09:17.938 "pending_bdev_io": 0, 00:09:17.938 "completed_nvme_io": 0, 00:09:17.938 "transports": [] 00:09:17.938 } 00:09:17.938 ] 00:09:17.938 }' 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.938 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.938 [2024-07-15 20:04:15.366160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:18.198 "tick_rate": 2400000000, 00:09:18.198 "poll_groups": [ 00:09:18.198 { 00:09:18.198 "name": "nvmf_tgt_poll_group_000", 00:09:18.198 "admin_qpairs": 0, 00:09:18.198 "io_qpairs": 0, 00:09:18.198 "current_admin_qpairs": 0, 00:09:18.198 "current_io_qpairs": 0, 00:09:18.198 "pending_bdev_io": 0, 00:09:18.198 "completed_nvme_io": 0, 00:09:18.198 "transports": [ 00:09:18.198 { 00:09:18.198 "trtype": "TCP" 00:09:18.198 } 00:09:18.198 ] 00:09:18.198 }, 00:09:18.198 { 00:09:18.198 "name": "nvmf_tgt_poll_group_001", 00:09:18.198 "admin_qpairs": 0, 00:09:18.198 "io_qpairs": 0, 00:09:18.198 "current_admin_qpairs": 0, 00:09:18.198 "current_io_qpairs": 0, 00:09:18.198 "pending_bdev_io": 0, 00:09:18.198 "completed_nvme_io": 0, 00:09:18.198 "transports": [ 00:09:18.198 { 00:09:18.198 "trtype": "TCP" 00:09:18.198 } 00:09:18.198 ] 00:09:18.198 }, 00:09:18.198 { 00:09:18.198 "name": "nvmf_tgt_poll_group_002", 00:09:18.198 "admin_qpairs": 0, 00:09:18.198 "io_qpairs": 0, 00:09:18.198 "current_admin_qpairs": 0, 00:09:18.198 "current_io_qpairs": 0, 00:09:18.198 "pending_bdev_io": 0, 00:09:18.198 "completed_nvme_io": 0, 00:09:18.198 "transports": [ 00:09:18.198 { 00:09:18.198 "trtype": "TCP" 00:09:18.198 } 00:09:18.198 ] 00:09:18.198 }, 00:09:18.198 { 00:09:18.198 "name": "nvmf_tgt_poll_group_003", 00:09:18.198 "admin_qpairs": 0, 00:09:18.198 "io_qpairs": 0, 00:09:18.198 "current_admin_qpairs": 0, 00:09:18.198 "current_io_qpairs": 0, 00:09:18.198 "pending_bdev_io": 0, 00:09:18.198 "completed_nvme_io": 0, 00:09:18.198 "transports": [ 00:09:18.198 { 00:09:18.198 "trtype": "TCP" 00:09:18.198 } 00:09:18.198 ] 00:09:18.198 } 00:09:18.198 ] 00:09:18.198 }' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 Malloc1 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 [2024-07-15 20:04:15.557894] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:18.198 [2024-07-15 20:04:15.584854] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:18.198 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:18.198 could not add new controller: failed to write to nvme-fabrics device 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.198 20:04:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.200 20:04:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.200 20:04:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.200 20:04:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.200 20:04:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:20.200 20:04:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.115 20:04:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.115 [2024-07-15 20:04:19.348583] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:22.115 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:22.115 could not add new controller: failed to write to nvme-fabrics device 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.115 20:04:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.028 20:04:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.028 20:04:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.028 20:04:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.028 20:04:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:24.028 20:04:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:25.944 20:04:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:25.944 20:04:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 [2024-07-15 20:04:23.144466] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.944 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.945 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.945 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:25.945 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.945 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.945 20:04:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.945 20:04:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.328 20:04:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.328 20:04:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:27.328 20:04:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.328 20:04:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:27.328 20:04:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 [2024-07-15 20:04:26.909192] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.896 20:04:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.277 20:04:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.277 20:04:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:31.277 20:04:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.277 20:04:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.277 20:04:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.192 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.453 [2024-07-15 20:04:30.628959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.453 20:04:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.838 20:04:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.838 20:04:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.838 20:04:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.838 20:04:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.838 20:04:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 [2024-07-15 20:04:34.374012] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.382 20:04:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.768 20:04:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.768 20:04:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.768 20:04:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.768 20:04:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:38.768 20:04:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.684 20:04:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.684 20:04:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.684 20:04:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.684 20:04:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:40.684 20:04:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.684 
20:04:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:40.684 20:04:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.684 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.945 [2024-07-15 20:04:38.123608] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.945 20:04:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.945 20:04:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.331 20:04:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.331 20:04:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:42.331 20:04:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.331 20:04:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:42.331 20:04:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 [2024-07-15 20:04:41.893012] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 [2024-07-15 20:04:41.953116] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.904 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.905 20:04:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.905 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 [2024-07-15 20:04:42.017322] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 [2024-07-15 20:04:42.077503] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
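The loops traced above repeatedly build up and tear down the same NVMe/TCP subsystem through SPDK RPCs. A condensed bash sketch of one such iteration follows; it is not the test's exact code, and it assumes scripts/rpc.py from an SPDK checkout talking to the running nvmf_tgt on its default RPC socket (the rpc_cmd helper seen in the trace issues the same subcommands, and rpc.py is invoked directly with them later in this log).

    #!/usr/bin/env bash
    # Sketch of the per-iteration RPC sequence seen in the trace above.
    # Subcommands and arguments are copied from the log; the rpc.py path and
    # the loop count are assumptions for a stand-alone example.
    rpc=./scripts/rpc.py            # adjust to your SPDK checkout
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1    # namespace backed by the Malloc1 bdev
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1       # drop NSID 1 again
        $rpc nvmf_delete_subsystem "$nqn"
    done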
00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 [2024-07-15 20:04:42.137678] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:44.905 "tick_rate": 2400000000, 00:09:44.905 "poll_groups": [ 00:09:44.905 { 00:09:44.905 "name": "nvmf_tgt_poll_group_000", 00:09:44.905 "admin_qpairs": 0, 00:09:44.905 "io_qpairs": 224, 00:09:44.905 "current_admin_qpairs": 0, 00:09:44.905 "current_io_qpairs": 0, 00:09:44.905 "pending_bdev_io": 0, 00:09:44.905 "completed_nvme_io": 475, 00:09:44.905 "transports": [ 00:09:44.905 { 00:09:44.905 "trtype": "TCP" 00:09:44.905 } 00:09:44.905 ] 00:09:44.905 }, 00:09:44.905 { 00:09:44.905 "name": "nvmf_tgt_poll_group_001", 00:09:44.905 "admin_qpairs": 1, 00:09:44.905 "io_qpairs": 223, 00:09:44.905 "current_admin_qpairs": 0, 00:09:44.905 "current_io_qpairs": 0, 00:09:44.905 "pending_bdev_io": 0, 00:09:44.905 "completed_nvme_io": 271, 00:09:44.905 "transports": [ 00:09:44.905 { 00:09:44.905 "trtype": "TCP" 00:09:44.905 } 00:09:44.905 ] 00:09:44.905 }, 00:09:44.905 { 
00:09:44.905 "name": "nvmf_tgt_poll_group_002", 00:09:44.905 "admin_qpairs": 6, 00:09:44.905 "io_qpairs": 218, 00:09:44.905 "current_admin_qpairs": 0, 00:09:44.905 "current_io_qpairs": 0, 00:09:44.905 "pending_bdev_io": 0, 00:09:44.905 "completed_nvme_io": 220, 00:09:44.905 "transports": [ 00:09:44.905 { 00:09:44.905 "trtype": "TCP" 00:09:44.905 } 00:09:44.905 ] 00:09:44.905 }, 00:09:44.905 { 00:09:44.905 "name": "nvmf_tgt_poll_group_003", 00:09:44.905 "admin_qpairs": 0, 00:09:44.905 "io_qpairs": 224, 00:09:44.905 "current_admin_qpairs": 0, 00:09:44.905 "current_io_qpairs": 0, 00:09:44.905 "pending_bdev_io": 0, 00:09:44.905 "completed_nvme_io": 273, 00:09:44.905 "transports": [ 00:09:44.905 { 00:09:44.905 "trtype": "TCP" 00:09:44.905 } 00:09:44.905 ] 00:09:44.905 } 00:09:44.905 ] 00:09:44.905 }' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.905 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.905 rmmod nvme_tcp 00:09:44.905 rmmod nvme_fabrics 00:09:44.905 rmmod nvme_keyring 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 835071 ']' 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 835071 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 835071 ']' 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 835071 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835071 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835071' 00:09:45.166 killing process with pid 835071 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 835071 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 835071 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.166 20:04:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.714 20:04:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.714 00:09:47.714 real 0m37.461s 00:09:47.714 user 1m53.819s 00:09:47.714 sys 0m7.158s 00:09:47.714 20:04:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.714 20:04:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.714 ************************************ 00:09:47.714 END TEST nvmf_rpc 00:09:47.714 ************************************ 00:09:47.714 20:04:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:47.714 20:04:44 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:47.714 20:04:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.714 20:04:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.714 20:04:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.714 ************************************ 00:09:47.714 START TEST nvmf_invalid 00:09:47.714 ************************************ 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:47.714 * Looking for test storage... 
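Before the teardown above, the test summed counters out of nvmf_get_stats with the jsum helper traced at target/rpc.sh@19-20: jq extracts one numeric field per poll group and awk adds up the column. A minimal stand-alone version is sketched below; the jq and awk filters are copied from the trace, while the function body and rpc.py path are assumptions (the original applies the filter to a saved stats string rather than calling the RPC again).

    #!/usr/bin/env bash
    # Sum a per-poll-group counter from the target's nvmf_get_stats JSON.
    jsum() {
        local filter=$1
        ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 7 in the run above (0+1+6+0)
    jsum '.poll_groups[].io_qpairs'      # 889 in the run above (224+223+218+224)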
00:09:47.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.714 20:04:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:54.298 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:54.298 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:54.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:54.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.298 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.560 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.560 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.560 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.560 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.560 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.560 20:04:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:09:54.820 00:09:54.820 --- 10.0.0.2 ping statistics --- 00:09:54.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.820 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:09:54.820 00:09:54.820 --- 10.0.0.1 ping statistics --- 00:09:54.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.820 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=844867 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 844867 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 844867 ']' 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.820 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:54.820 [2024-07-15 20:04:52.132211] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:09:54.820 [2024-07-15 20:04:52.132278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.820 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.820 [2024-07-15 20:04:52.204200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.081 [2024-07-15 20:04:52.279256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.081 [2024-07-15 20:04:52.279295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.081 [2024-07-15 20:04:52.279303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.081 [2024-07-15 20:04:52.279309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.081 [2024-07-15 20:04:52.279315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.081 [2024-07-15 20:04:52.279399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.081 [2024-07-15 20:04:52.279512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.081 [2024-07-15 20:04:52.279668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.081 [2024-07-15 20:04:52.279670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:55.653 20:04:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15797 00:09:55.914 [2024-07-15 20:04:53.092052] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:55.914 { 00:09:55.914 "nqn": "nqn.2016-06.io.spdk:cnode15797", 00:09:55.914 "tgt_name": "foobar", 00:09:55.914 "method": "nvmf_create_subsystem", 00:09:55.914 "req_id": 1 00:09:55.914 } 00:09:55.914 Got JSON-RPC error response 00:09:55.914 response: 00:09:55.914 { 00:09:55.914 "code": -32603, 00:09:55.914 "message": "Unable to find target foobar" 00:09:55.914 }' 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:55.914 { 00:09:55.914 "nqn": "nqn.2016-06.io.spdk:cnode15797", 00:09:55.914 "tgt_name": "foobar", 00:09:55.914 "method": "nvmf_create_subsystem", 00:09:55.914 "req_id": 1 00:09:55.914 } 00:09:55.914 Got JSON-RPC error response 00:09:55.914 response: 00:09:55.914 { 00:09:55.914 "code": -32603, 00:09:55.914 "message": "Unable to find target foobar" 
00:09:55.914 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20284 00:09:55.914 [2024-07-15 20:04:53.260584] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20284: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:55.914 { 00:09:55.914 "nqn": "nqn.2016-06.io.spdk:cnode20284", 00:09:55.914 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:55.914 "method": "nvmf_create_subsystem", 00:09:55.914 "req_id": 1 00:09:55.914 } 00:09:55.914 Got JSON-RPC error response 00:09:55.914 response: 00:09:55.914 { 00:09:55.914 "code": -32602, 00:09:55.914 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:55.914 }' 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:55.914 { 00:09:55.914 "nqn": "nqn.2016-06.io.spdk:cnode20284", 00:09:55.914 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:55.914 "method": "nvmf_create_subsystem", 00:09:55.914 "req_id": 1 00:09:55.914 } 00:09:55.914 Got JSON-RPC error response 00:09:55.914 response: 00:09:55.914 { 00:09:55.914 "code": -32602, 00:09:55.914 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:55.914 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:55.914 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24747 00:09:56.176 [2024-07-15 20:04:53.437187] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24747: invalid model number 'SPDK_Controller' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:56.176 { 00:09:56.176 "nqn": "nqn.2016-06.io.spdk:cnode24747", 00:09:56.176 "model_number": "SPDK_Controller\u001f", 00:09:56.176 "method": "nvmf_create_subsystem", 00:09:56.176 "req_id": 1 00:09:56.176 } 00:09:56.176 Got JSON-RPC error response 00:09:56.176 response: 00:09:56.176 { 00:09:56.176 "code": -32602, 00:09:56.176 "message": "Invalid MN SPDK_Controller\u001f" 00:09:56.176 }' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:56.176 { 00:09:56.176 "nqn": "nqn.2016-06.io.spdk:cnode24747", 00:09:56.176 "model_number": "SPDK_Controller\u001f", 00:09:56.176 "method": "nvmf_create_subsystem", 00:09:56.176 "req_id": 1 00:09:56.176 } 00:09:56.176 Got JSON-RPC error response 00:09:56.176 response: 00:09:56.176 { 00:09:56.176 "code": -32602, 00:09:56.176 "message": "Invalid MN SPDK_Controller\u001f" 00:09:56.176 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 
20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:56.176 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.177 
20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:56.177 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '%1n(-bm^/1aoDJn.74o%x' 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '%1n(-bm^/1aoDJn.74o%x' nqn.2016-06.io.spdk:cnode14924 00:09:56.437 [2024-07-15 20:04:53.778227] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14924: invalid serial number '%1n(-bm^/1aoDJn.74o%x' 00:09:56.437 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:56.437 { 00:09:56.437 "nqn": "nqn.2016-06.io.spdk:cnode14924", 00:09:56.437 "serial_number": "%1n(-bm^/1aoDJn.74o%x", 00:09:56.437 "method": "nvmf_create_subsystem", 00:09:56.437 "req_id": 1 00:09:56.437 } 00:09:56.437 Got JSON-RPC error response 00:09:56.437 response: 00:09:56.437 { 
00:09:56.437 "code": -32602, 00:09:56.437 "message": "Invalid SN %1n(-bm^/1aoDJn.74o%x" 00:09:56.437 }' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:56.438 { 00:09:56.438 "nqn": "nqn.2016-06.io.spdk:cnode14924", 00:09:56.438 "serial_number": "%1n(-bm^/1aoDJn.74o%x", 00:09:56.438 "method": "nvmf_create_subsystem", 00:09:56.438 "req_id": 1 00:09:56.438 } 00:09:56.438 Got JSON-RPC error response 00:09:56.438 response: 00:09:56.438 { 00:09:56.438 "code": -32602, 00:09:56.438 "message": "Invalid SN %1n(-bm^/1aoDJn.74o%x" 00:09:56.438 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 
00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:56.438 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.699 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:56.700 20:04:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 
00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK' 00:09:56.700 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK' nqn.2016-06.io.spdk:cnode16250 00:09:56.961 [2024-07-15 20:04:54.255729] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16250: invalid model number 'ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK' 00:09:56.961 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:56.961 { 00:09:56.961 "nqn": "nqn.2016-06.io.spdk:cnode16250", 00:09:56.961 "model_number": "ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK", 00:09:56.961 "method": "nvmf_create_subsystem", 00:09:56.961 "req_id": 1 00:09:56.961 } 00:09:56.961 Got JSON-RPC error response 00:09:56.961 response: 00:09:56.961 { 
00:09:56.961 "code": -32602, 00:09:56.961 "message": "Invalid MN ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK" 00:09:56.961 }' 00:09:56.961 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:56.961 { 00:09:56.961 "nqn": "nqn.2016-06.io.spdk:cnode16250", 00:09:56.961 "model_number": "ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK", 00:09:56.961 "method": "nvmf_create_subsystem", 00:09:56.961 "req_id": 1 00:09:56.961 } 00:09:56.961 Got JSON-RPC error response 00:09:56.961 response: 00:09:56.961 { 00:09:56.961 "code": -32602, 00:09:56.961 "message": "Invalid MN ax<$gof$Mr?j?o=|WZhk@SqvX#3tSG]/SS2$Y@EOK" 00:09:56.961 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:56.961 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:57.222 [2024-07-15 20:04:54.428393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.222 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:57.222 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:57.222 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:57.222 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:57.222 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:57.222 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:57.483 [2024-07-15 20:04:54.778539] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:57.483 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:57.483 { 00:09:57.483 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:57.483 "listen_address": { 00:09:57.483 "trtype": "tcp", 00:09:57.483 "traddr": "", 00:09:57.483 "trsvcid": "4421" 00:09:57.483 }, 00:09:57.483 "method": "nvmf_subsystem_remove_listener", 00:09:57.483 "req_id": 1 00:09:57.483 } 00:09:57.483 Got JSON-RPC error response 00:09:57.483 response: 00:09:57.483 { 00:09:57.483 "code": -32602, 00:09:57.483 "message": "Invalid parameters" 00:09:57.483 }' 00:09:57.483 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:57.483 { 00:09:57.483 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:57.483 "listen_address": { 00:09:57.483 "trtype": "tcp", 00:09:57.483 "traddr": "", 00:09:57.483 "trsvcid": "4421" 00:09:57.483 }, 00:09:57.483 "method": "nvmf_subsystem_remove_listener", 00:09:57.483 "req_id": 1 00:09:57.483 } 00:09:57.483 Got JSON-RPC error response 00:09:57.483 response: 00:09:57.483 { 00:09:57.483 "code": -32602, 00:09:57.483 "message": "Invalid parameters" 00:09:57.483 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:57.483 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5423 -i 0 00:09:57.744 [2024-07-15 20:04:54.951037] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5423: invalid cntlid range [0-65519] 00:09:57.744 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:57.744 { 00:09:57.744 "nqn": "nqn.2016-06.io.spdk:cnode5423", 00:09:57.744 "min_cntlid": 0, 00:09:57.744 
"method": "nvmf_create_subsystem", 00:09:57.744 "req_id": 1 00:09:57.744 } 00:09:57.744 Got JSON-RPC error response 00:09:57.744 response: 00:09:57.744 { 00:09:57.744 "code": -32602, 00:09:57.744 "message": "Invalid cntlid range [0-65519]" 00:09:57.744 }' 00:09:57.744 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:57.744 { 00:09:57.744 "nqn": "nqn.2016-06.io.spdk:cnode5423", 00:09:57.744 "min_cntlid": 0, 00:09:57.744 "method": "nvmf_create_subsystem", 00:09:57.744 "req_id": 1 00:09:57.744 } 00:09:57.744 Got JSON-RPC error response 00:09:57.744 response: 00:09:57.744 { 00:09:57.744 "code": -32602, 00:09:57.744 "message": "Invalid cntlid range [0-65519]" 00:09:57.744 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.744 20:04:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7765 -i 65520 00:09:57.744 [2024-07-15 20:04:55.127601] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7765: invalid cntlid range [65520-65519] 00:09:57.744 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:57.744 { 00:09:57.744 "nqn": "nqn.2016-06.io.spdk:cnode7765", 00:09:57.744 "min_cntlid": 65520, 00:09:57.744 "method": "nvmf_create_subsystem", 00:09:57.744 "req_id": 1 00:09:57.744 } 00:09:57.744 Got JSON-RPC error response 00:09:57.744 response: 00:09:57.744 { 00:09:57.744 "code": -32602, 00:09:57.744 "message": "Invalid cntlid range [65520-65519]" 00:09:57.744 }' 00:09:57.744 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:57.744 { 00:09:57.744 "nqn": "nqn.2016-06.io.spdk:cnode7765", 00:09:57.744 "min_cntlid": 65520, 00:09:57.744 "method": "nvmf_create_subsystem", 00:09:57.744 "req_id": 1 00:09:57.744 } 00:09:57.744 Got JSON-RPC error response 00:09:57.744 response: 00:09:57.744 { 00:09:57.744 "code": -32602, 00:09:57.744 "message": "Invalid cntlid range [65520-65519]" 00:09:57.744 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.744 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16642 -I 0 00:09:58.004 [2024-07-15 20:04:55.296160] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16642: invalid cntlid range [1-0] 00:09:58.004 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:58.004 { 00:09:58.004 "nqn": "nqn.2016-06.io.spdk:cnode16642", 00:09:58.004 "max_cntlid": 0, 00:09:58.004 "method": "nvmf_create_subsystem", 00:09:58.004 "req_id": 1 00:09:58.004 } 00:09:58.004 Got JSON-RPC error response 00:09:58.004 response: 00:09:58.004 { 00:09:58.004 "code": -32602, 00:09:58.004 "message": "Invalid cntlid range [1-0]" 00:09:58.004 }' 00:09:58.004 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:58.004 { 00:09:58.004 "nqn": "nqn.2016-06.io.spdk:cnode16642", 00:09:58.004 "max_cntlid": 0, 00:09:58.004 "method": "nvmf_create_subsystem", 00:09:58.004 "req_id": 1 00:09:58.005 } 00:09:58.005 Got JSON-RPC error response 00:09:58.005 response: 00:09:58.005 { 00:09:58.005 "code": -32602, 00:09:58.005 "message": "Invalid cntlid range [1-0]" 00:09:58.005 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:58.005 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23992 -I 65520 00:09:58.265 [2024-07-15 20:04:55.464657] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23992: invalid cntlid range [1-65520] 00:09:58.265 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:58.265 { 00:09:58.265 "nqn": "nqn.2016-06.io.spdk:cnode23992", 00:09:58.265 "max_cntlid": 65520, 00:09:58.265 "method": "nvmf_create_subsystem", 00:09:58.265 "req_id": 1 00:09:58.265 } 00:09:58.265 Got JSON-RPC error response 00:09:58.265 response: 00:09:58.265 { 00:09:58.265 "code": -32602, 00:09:58.265 "message": "Invalid cntlid range [1-65520]" 00:09:58.265 }' 00:09:58.265 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:58.265 { 00:09:58.265 "nqn": "nqn.2016-06.io.spdk:cnode23992", 00:09:58.265 "max_cntlid": 65520, 00:09:58.265 "method": "nvmf_create_subsystem", 00:09:58.265 "req_id": 1 00:09:58.265 } 00:09:58.265 Got JSON-RPC error response 00:09:58.265 response: 00:09:58.265 { 00:09:58.265 "code": -32602, 00:09:58.265 "message": "Invalid cntlid range [1-65520]" 00:09:58.265 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:58.265 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9614 -i 6 -I 5 00:09:58.265 [2024-07-15 20:04:55.633192] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9614: invalid cntlid range [6-5] 00:09:58.265 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:58.265 { 00:09:58.265 "nqn": "nqn.2016-06.io.spdk:cnode9614", 00:09:58.265 "min_cntlid": 6, 00:09:58.265 "max_cntlid": 5, 00:09:58.265 "method": "nvmf_create_subsystem", 00:09:58.265 "req_id": 1 00:09:58.265 } 00:09:58.265 Got JSON-RPC error response 00:09:58.265 response: 00:09:58.265 { 00:09:58.265 "code": -32602, 00:09:58.265 "message": "Invalid cntlid range [6-5]" 00:09:58.265 }' 00:09:58.265 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:58.265 { 00:09:58.265 "nqn": "nqn.2016-06.io.spdk:cnode9614", 00:09:58.265 "min_cntlid": 6, 00:09:58.265 "max_cntlid": 5, 00:09:58.265 "method": "nvmf_create_subsystem", 00:09:58.265 "req_id": 1 00:09:58.265 } 00:09:58.265 Got JSON-RPC error response 00:09:58.265 response: 00:09:58.265 { 00:09:58.265 "code": -32602, 00:09:58.265 "message": "Invalid cntlid range [6-5]" 00:09:58.265 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:58.265 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:58.527 { 00:09:58.527 "name": "foobar", 00:09:58.527 "method": "nvmf_delete_target", 00:09:58.527 "req_id": 1 00:09:58.527 } 00:09:58.527 Got JSON-RPC error response 00:09:58.527 response: 00:09:58.527 { 00:09:58.527 "code": -32602, 00:09:58.527 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:58.527 }' 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:58.527 { 00:09:58.527 "name": "foobar", 00:09:58.527 "method": "nvmf_delete_target", 00:09:58.527 "req_id": 1 00:09:58.527 } 00:09:58.527 Got JSON-RPC error response 00:09:58.527 response: 00:09:58.527 { 00:09:58.527 "code": -32602, 00:09:58.527 "message": "The specified target doesn't exist, cannot delete it." 00:09:58.527 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.527 rmmod nvme_tcp 00:09:58.527 rmmod nvme_fabrics 00:09:58.527 rmmod nvme_keyring 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 844867 ']' 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 844867 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 844867 ']' 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 844867 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 844867 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 844867' 00:09:58.527 killing process with pid 844867 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 844867 00:09:58.527 20:04:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 844867 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.787 20:04:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.787 
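The nvmf_invalid checks traced above all follow one pattern: build a deliberately bad value (gen_random_s assembles the random ones character by character from the chars array), pass it to scripts/rpc.py nvmf_create_subsystem, and assert that the returned JSON-RPC error contains the expected text. The heavily backslash-escaped globs such as *\I\n\v\a\l\i\d\ \S\N* are simply how bash xtrace prints the [[ ... == *"Invalid SN"* ]] patterns. Below is a minimal bash sketch of that pattern, using only the rpc.py path, option letters, and NQNs visible in the trace; the check_invalid helper and the simplified gen_random_s are illustrative approximations, not the actual test code.

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Rough stand-in for gen_random_s: N characters drawn from codepoints 32..127,
  # the same range as the chars array traced above. invalid.sh feeds its 21- and
  # 41-character output into the same checks (21 > the 20-byte SN field,
  # 41 > the 40-byte MN field), which is why those strings are always rejected.
  gen_random_s() {
      local length=$1 s='' c i
      for ((i = 0; i < length; i++)); do
          c=$(printf '%x' $((RANDOM % 96 + 32)))
          s+=$(echo -e "\x$c")
      done
      printf '%s\n' "$s"
  }

  # Issue an RPC that is expected to fail and check the JSON-RPC error text.
  check_invalid() {
      local expected=$1; shift
      local out
      out=$("$rpc" "$@" 2>&1 || true)
      [[ $out == *"$expected"* ]]
  }

  check_invalid 'Unable to find target' \
      nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15797
  check_invalid 'Invalid SN' \
      nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20284
  check_invalid 'Invalid cntlid range' \
      nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5423 -i 0

Each check_invalid call succeeds only when the expected error text comes back, which is the same assertion the out=/[[ pairs in the trace perform before the script moves on to the next negative case.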
20:04:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.699 20:04:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.699 00:10:00.699 real 0m13.378s 00:10:00.699 user 0m19.187s 00:10:00.699 sys 0m6.246s 00:10:00.699 20:04:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.699 20:04:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:00.699 ************************************ 00:10:00.699 END TEST nvmf_invalid 00:10:00.699 ************************************ 00:10:00.961 20:04:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:00.961 20:04:58 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:00.961 20:04:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:00.961 20:04:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.961 20:04:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.961 ************************************ 00:10:00.961 START TEST nvmf_abort 00:10:00.961 ************************************ 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:00.961 * Looking for test storage... 00:10:00.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.961 
20:04:58 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.961 20:04:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:09.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:09.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:09.137 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.137 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:09.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.138 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:10:09.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:10:09.138 00:10:09.138 --- 10.0.0.2 ping statistics --- 00:10:09.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.138 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:10:09.138 00:10:09.138 --- 10.0.0.1 ping statistics --- 00:10:09.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.138 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=850041 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 850041 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 850041 ']' 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.138 20:05:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 [2024-07-15 20:05:05.520194] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
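The sequence traced above is the harness building its point-to-point test network before starting the target: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace and addressed as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator side, NVMe/TCP traffic on port 4420 is allowed through, and both directions are verified with ping. Condensed into plain commands, using the interface names and addresses recorded in this log:

# Condensed from the nvmf_tcp_init trace above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP connections in
ping -c 1 10.0.0.2                                             # root namespace -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator side

The target itself is then launched with "ip netns exec cvl_0_0_ns_spdk" prefixed to the nvmf_tgt command, which is what the nvmfappstart lines that follow show.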
00:10:09.138 [2024-07-15 20:05:05.520243] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.138 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.138 [2024-07-15 20:05:05.604942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.138 [2024-07-15 20:05:05.676182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.138 [2024-07-15 20:05:05.676229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.138 [2024-07-15 20:05:05.676237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.138 [2024-07-15 20:05:05.676244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.138 [2024-07-15 20:05:05.676250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.138 [2024-07-15 20:05:05.676361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.138 [2024-07-15 20:05:05.676623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.138 [2024-07-15 20:05:05.676624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 [2024-07-15 20:05:06.340761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 Malloc0 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 Delay0 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
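The rpc_cmd calls traced above build the abort-test target configuration: a TCP transport, a 64 MB malloc bdev with a 4096-byte block size (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from target/abort.sh), a delay bdev stacked on top of it so that submitted I/O stays in flight long enough to be aborted, and subsystem nqn.2016-06.io.spdk:cnode0. Written directly against scripts/rpc.py, which is what rpc_cmd forwards to in this test framework, the same bring-up is roughly the following sketch:

# Sketch of the bring-up traced above, expressed as direct rpc.py calls
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256              # TCP transport with the options used in this run
$rpc bdev_malloc_create 64 4096 -b Malloc0                       # 64 MB RAM-backed bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latency values in microseconds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host, -s: serial number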
00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 [2024-07-15 20:05:06.420579] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.138 20:05:06 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:09.138 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.138 [2024-07-15 20:05:06.488908] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:11.680 Initializing NVMe Controllers 00:10:11.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:11.680 controller IO queue size 128 less than required 00:10:11.680 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:11.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:11.680 Initialization complete. Launching workers. 
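At this point the Delay0 namespace is attached and the subsystem listens on 10.0.0.2:4420, so the test launches SPDK's abort example as the initiator-side workload; everything after "Launching workers." is that application's own summary. The invocation, exactly as traced above but reformatted for readability (the option annotations are my reading of the example's usage, not something stated in the log):

# Abort workload against the listener brought up above
# -c 0x1: initiator core mask, -t 1: run time in seconds, -q 128: queue depth, -l warning: log level
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

The summary counters it prints next (I/O completed vs. failed, aborts submitted, success/unsuccess) are what target/abort.sh evaluates before tearing the target down.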
00:10:11.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30764 00:10:11.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30825, failed to submit 62 00:10:11.680 success 30768, unsuccess 57, failed 0 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.680 rmmod nvme_tcp 00:10:11.680 rmmod nvme_fabrics 00:10:11.680 rmmod nvme_keyring 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 850041 ']' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 850041 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 850041 ']' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 850041 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 850041 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 850041' 00:10:11.680 killing process with pid 850041 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 850041 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 850041 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.680 20:05:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.599 20:05:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.599 00:10:13.599 real 0m12.750s 00:10:13.599 user 0m13.214s 00:10:13.599 sys 0m6.218s 00:10:13.599 20:05:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.599 20:05:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:13.599 ************************************ 00:10:13.599 END TEST nvmf_abort 00:10:13.599 ************************************ 00:10:13.599 20:05:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:13.599 20:05:10 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:13.599 20:05:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:13.599 20:05:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.599 20:05:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.599 ************************************ 00:10:13.599 START TEST nvmf_ns_hotplug_stress 00:10:13.599 ************************************ 00:10:13.599 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:13.860 * Looking for test storage... 00:10:13.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.860 20:05:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.860 20:05:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.860 20:05:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:20.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:20.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.484 20:05:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:20.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:20.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.484 20:05:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.484 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.745 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.745 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.745 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.745 20:05:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.745 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.745 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.745 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:10:20.745 00:10:20.745 --- 10.0.0.2 ping statistics --- 00:10:20.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.745 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:10:20.745 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:10:20.745 00:10:20.745 --- 10.0.0.1 ping statistics --- 00:10:20.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.745 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=854732 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 854732 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 854732 ']' 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.746 20:05:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.006 [2024-07-15 20:05:18.194212] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:10:21.006 [2024-07-15 20:05:18.194259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.006 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.006 [2024-07-15 20:05:18.275692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.006 [2024-07-15 20:05:18.339977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.006 [2024-07-15 20:05:18.340014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.006 [2024-07-15 20:05:18.340021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.006 [2024-07-15 20:05:18.340027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.006 [2024-07-15 20:05:18.340033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.006 [2024-07-15 20:05:18.340111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.006 [2024-07-15 20:05:18.340277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.006 [2024-07-15 20:05:18.340370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.947 [2024-07-15 20:05:19.212172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.947 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:22.208 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.208 [2024-07-15 20:05:19.549541] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.208 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:22.468 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:22.468 Malloc0 00:10:22.729 20:05:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.729 Delay0 00:10:22.729 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.989 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:22.989 NULL1 00:10:22.989 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:23.250 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=855390 00:10:23.250 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:23.250 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:23.250 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.250 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.510 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.510 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:23.510 20:05:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:23.770 [2024-07-15 20:05:21.051873] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:23.770 true 00:10:23.770 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:23.770 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.030 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.030 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:24.030 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:24.290 true 00:10:24.290 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:24.290 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.550 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.550 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:24.550 20:05:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:24.811 true 00:10:24.811 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:24.811 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.811 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.071 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:25.071 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:25.331 true 00:10:25.331 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:25.331 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.331 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.591 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:25.591 20:05:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:25.851 true 00:10:25.851 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:25.851 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.851 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.112 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:26.112 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:26.373 true 00:10:26.373 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:26.373 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.373 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.633 
20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:26.633 20:05:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:26.633 true 00:10:26.893 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:26.893 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.893 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.154 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:27.154 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:27.154 true 00:10:27.154 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:27.154 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.415 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.750 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:27.750 20:05:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:27.750 true 00:10:27.750 20:05:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:27.750 20:05:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.690 Read completed with error (sct=0, sc=11) 00:10:28.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.690 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.950 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:28.950 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:28.950 true 00:10:28.950 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:28.950 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.210 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.470 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:29.470 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:29.470 true 00:10:29.470 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:29.470 20:05:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.731 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.991 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:29.991 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:29.991 true 00:10:29.991 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:29.991 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.251 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.510 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:30.510 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:30.510 true 00:10:30.510 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:30.510 20:05:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.769 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.769 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:30.769 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:31.029 true 00:10:31.029 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:31.029 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.290 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.290 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 
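The trace above keeps repeating one loop body from ns_hotplug_stress.sh: check that the monitored process (855390 in this run) is still alive, hot-remove and re-add namespace 1 (Delay0) on nqn.2016-06.io.spdk:cnode1, then bump null_size and resize the NULL1 bdev while I/O is in flight. A minimal sketch of that loop, reconstructed only from the sh@44-50 markers in the trace; the rpc shorthand, the mon_pid name, the starting null_size value, and the while/kill -0 structure are assumptions, not the script verbatim:

# hypothetical reconstruction of the per-iteration hotplug/resize loop seen in the trace
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # RPC client path taken from the log
mon_pid=855390     # the PID polled with kill -0 in the trace (assumed to be the background I/O job)
null_size=1002     # starting value is an assumption; the trace shows it stepping from 1003 up to 1050
while kill -0 "$mon_pid" 2> /dev/null; do                                  # sh@44: stop once the process is gone
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1             # sh@45: hot-remove NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0           # sh@46: hot-add it back
    ((null_size++))                                                        # sh@49: next size
    $rpc bdev_null_resize NULL1 "$null_size"                               # sh@50: grow NULL1 under load
done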
00:10:31.290 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:31.550 true 00:10:31.550 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:31.550 20:05:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.810 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.810 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:31.810 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:32.070 true 00:10:32.070 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:32.070 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.330 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.330 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:32.330 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:32.590 true 00:10:32.590 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:32.590 20:05:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.851 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.851 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:32.851 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:33.111 true 00:10:33.111 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:33.111 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.111 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.372 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:33.372 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:10:33.633 true 00:10:33.633 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:33.633 20:05:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.633 20:05:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.893 20:05:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:33.893 20:05:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:34.154 true 00:10:34.154 20:05:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:34.154 20:05:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.097 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.097 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:35.097 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:35.358 true 00:10:35.358 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:35.358 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.358 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.618 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:35.618 20:05:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:35.879 true 00:10:35.879 20:05:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:35.879 20:05:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.260 20:05:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.260 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.260 20:05:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:37.261 20:05:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:37.261 true 00:10:37.261 20:05:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:37.261 20:05:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.203 20:05:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.464 20:05:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:38.464 20:05:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:38.464 true 00:10:38.464 20:05:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:38.464 20:05:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.723 20:05:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.723 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:38.723 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:38.983 true 00:10:38.983 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:38.983 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.243 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.243 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:39.243 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:39.504 true 00:10:39.504 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:39.504 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.767 20:05:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.767 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:10:39.767 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:40.027 true 00:10:40.027 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:40.027 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.288 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.288 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:40.288 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:40.548 true 00:10:40.548 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:40.548 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.809 20:05:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.809 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:40.809 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:41.069 true 00:10:41.069 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:41.069 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.069 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.328 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:41.328 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:41.587 true 00:10:41.587 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:41.587 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.587 20:05:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.847 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:41.847 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:42.113 true 00:10:42.113 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:42.113 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.113 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.411 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:42.411 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:42.411 true 00:10:42.411 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:42.411 20:05:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.354 20:05:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.613 20:05:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:43.613 20:05:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:43.613 true 00:10:43.874 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:43.874 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.874 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.133 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:44.133 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:44.133 true 00:10:44.133 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:44.133 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.394 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.655 20:05:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:44.655 20:05:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:44.655 true 00:10:44.655 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:44.655 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.914 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.172 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:45.173 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:45.173 true 00:10:45.173 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:45.173 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.431 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.693 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:45.693 20:05:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:45.693 true 00:10:45.693 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:45.693 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.964 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.230 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:46.230 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:46.230 true 00:10:46.230 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:46.230 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.490 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.490 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:46.490 20:05:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:46.752 true 00:10:46.752 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:46.752 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.012 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.012 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:47.012 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:47.272 true 00:10:47.272 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:47.272 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.533 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.533 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:47.533 20:05:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:47.794 true 00:10:47.794 20:05:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:47.794 20:05:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.734 20:05:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.993 20:05:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:48.993 20:05:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:48.993 true 00:10:48.993 20:05:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:48.993 20:05:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.931 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.931 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:49.931 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:50.191 true 00:10:50.191 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:50.191 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.450 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.450 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:50.450 20:05:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:50.709 true 00:10:50.709 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:50.709 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.968 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.968 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:50.968 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:51.227 true 00:10:51.227 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:51.227 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.488 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.488 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:51.488 20:05:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:51.747 true 00:10:51.748 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:51.748 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.008 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.008 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:52.008 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:52.268 true 00:10:52.268 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:52.268 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.268 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.529 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:52.529 20:05:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:52.789 true 00:10:52.789 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:52.789 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.789 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.049 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:53.049 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:53.310 true 00:10:53.310 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:53.310 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.310 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.571 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:53.571 20:05:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:53.571 Initializing NVMe Controllers 00:10:53.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:53.571 Controller IO queue size 128, less than required. 00:10:53.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:53.571 Controller IO queue size 128, less than required. 00:10:53.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:53.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:53.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:53.571 Initialization complete. Launching workers. 
00:10:53.571 ======================================================== 00:10:53.571 Latency(us) 00:10:53.571 Device Information : IOPS MiB/s Average min max 00:10:53.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 543.41 0.27 53412.59 2181.60 1154672.66 00:10:53.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7181.72 3.51 17765.27 2524.13 411294.97 00:10:53.571 ======================================================== 00:10:53.571 Total : 7725.13 3.77 20272.82 2181.60 1154672.66 00:10:53.571 00:10:53.571 true 00:10:53.832 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 855390 00:10:53.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (855390) - No such process 00:10:53.832 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 855390 00:10:53.832 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.832 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.093 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:54.093 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:54.093 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:54.093 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.093 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:54.093 null0 00:10:54.353 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.353 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.353 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:54.353 null1 00:10:54.353 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.354 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.354 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:54.614 null2 00:10:54.615 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.615 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.615 20:05:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:54.615 null3 00:10:54.615 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.615 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.615 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:54.876 null4 00:10:54.876 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.876 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.876 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:55.137 null5 00:10:55.137 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.137 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.137 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:55.137 null6 00:10:55.137 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.137 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.137 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:55.398 null7 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
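From this point the trace interleaves eight background workers, each running the script's add_remove helper against its own null bdev and a fixed namespace ID. A sketch of that helper, reconstructed from the sh@14-18 markers in the trace; the rpc shorthand and the exact wording are assumptions rather than the script verbatim:

# hypothetical reconstruction of the add_remove worker traced at ns_hotplug_stress.sh@14-18
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # RPC client path taken from the log
add_remove() {
    local nsid=$1 bdev=$2                                                      # sh@14: e.g. nsid=3, bdev=null2
    for ((i = 0; i < 10; i++)); do                                             # sh@16: ten add/remove cycles per worker
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17: attach bdev as that NSID
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18: detach it again
    done
}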
00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
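The surrounding sh@58-66 markers show how those workers are set up: eight null bdevs are created, one add_remove worker is launched per namespace ID with its PID collected, and the script then waits on all of them (the "wait 861933 861934 ..." visible in the trace). A sketch under the assumption that the add_remove helper from the previous sketch is already defined in the same shell:

# hypothetical reconstruction of the multi-worker setup traced at ns_hotplug_stress.sh@58-66
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # as in the sketches above
nthreads=8                                            # sh@58
pids=()                                               # sh@58
for ((i = 0; i < nthreads; i++)); do                  # sh@59
    $rpc bdev_null_create "null$i" 100 4096           # sh@60: 100 MiB null bdev, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do                  # sh@62
    add_remove $((i + 1)) "null$i" &                  # sh@63: namespace ID i+1 backed by null$i
    pids+=($!)                                        # sh@64: collect the worker PID
done
wait "${pids[@]}"                                     # sh@66: matches the "wait 861933 861934 ..." in the trace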
00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 861933 861934 861936 861938 861940 861942 861944 861946 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.398 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.659 20:05:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.659 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.660 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.660 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.660 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.660 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.920 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.920 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.920 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.920 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.920 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.921 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.182 
20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.182 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.447 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.777 20:05:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.777 20:05:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.777 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.039 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.301 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.563 20:05:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.563 20:05:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.824 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.825 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.085 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.346 
20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.346 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.347 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.606 20:05:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.606 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.606 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.866 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.125 rmmod nvme_tcp 00:10:59.125 rmmod nvme_fabrics 00:10:59.125 rmmod nvme_keyring 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 854732 ']' 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 854732 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 854732 ']' 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 854732 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 854732 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 854732' 00:10:59.125 killing process with pid 854732 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 854732 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 854732 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.125 20:05:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.663 20:05:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.663 00:11:01.663 real 0m47.595s 00:11:01.663 user 3m11.504s 00:11:01.663 sys 0m15.324s 00:11:01.663 20:05:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.663 20:05:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.663 ************************************ 00:11:01.663 END TEST nvmf_ns_hotplug_stress 00:11:01.663 ************************************ 00:11:01.663 20:05:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:01.663 20:05:58 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:01.663 20:05:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.663 20:05:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.663 20:05:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.663 ************************************ 00:11:01.663 START TEST nvmf_connect_stress 00:11:01.663 ************************************ 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:01.663 * Looking for test storage... 
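[editor's sketch] The interleaved xtrace above comes from target/ns_hotplug_stress.sh lines 16-18: a counted loop that hot-adds a namespace backed by one of the null0..null7 bdevs to nqn.2016-06.io.spdk:cnode1 and then hot-removes it again, apparently once per namespace ID with several copies of the loop running at once (the trace mixes eight namespace IDs and far more than ten ++i steps). A minimal sketch of that pattern, reconstructed from the trace only -- the add_remove helper name, the parallel background invocation and the rpc variable are assumptions; the RPC verbs, the subsystem NQN, the nsid range and the null bdev names are taken from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as printed in the trace

    add_remove() {                      # hypothetical helper, one instance per namespace ID
            local nsid=$1 bdev=$2
            for ((i = 0; i < 10; ++i)); do                                              # ns_hotplug_stress.sh@16
                    "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
                    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
            done
    }

    for nsid in {1..8}; do
            add_remove "$nsid" "null$((nsid - 1))" &   # parallelism inferred from the interleaved output
    done
    wait
    trap - SIGINT SIGTERM EXIT   # ns_hotplug_stress.sh@68 in the trace
    nvmftestfini                 # ns_hotplug_stress.sh@70, tears the target down as shown above
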
00:11:01.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.663 20:05:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:08.248 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:08.248 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:08.248 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.248 20:06:05 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:08.248 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.248 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:11:08.509 00:11:08.509 --- 10.0.0.2 ping statistics --- 00:11:08.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.509 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:11:08.509 00:11:08.509 --- 10.0.0.1 ping statistics --- 00:11:08.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.509 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=866903 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 866903 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 866903 ']' 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.509 20:06:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.770 [2024-07-15 20:06:05.953281] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:11:08.770 [2024-07-15 20:06:05.953334] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.770 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.770 [2024-07-15 20:06:06.037835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.770 [2024-07-15 20:06:06.127965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.770 [2024-07-15 20:06:06.128021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.770 [2024-07-15 20:06:06.128029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.770 [2024-07-15 20:06:06.128036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.770 [2024-07-15 20:06:06.128042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.770 [2024-07-15 20:06:06.128180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.770 [2024-07-15 20:06:06.128372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.770 [2024-07-15 20:06:06.128372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.340 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.340 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:09.340 20:06:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.340 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.340 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.600 [2024-07-15 20:06:06.782035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.600 [2024-07-15 20:06:06.816257] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.600 NULL1 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.600 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=867227 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.601 20:06:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.861 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.861 20:06:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:09.861 20:06:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.861 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.861 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.431 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.431 20:06:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:10.431 20:06:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.431 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.431 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.691 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.691 20:06:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:10.691 
20:06:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.691 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.691 20:06:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.952 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.952 20:06:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:10.952 20:06:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.952 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.952 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.213 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.213 20:06:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:11.213 20:06:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.213 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.213 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.473 20:06:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:11.473 20:06:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.473 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.473 20:06:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.045 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.045 20:06:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:12.045 20:06:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.045 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.045 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.307 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.307 20:06:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:12.307 20:06:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.307 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.307 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.569 20:06:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:12.569 20:06:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.569 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.569 20:06:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.829 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.829 20:06:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:12.829 20:06:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:12.829 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.829 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.090 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.090 20:06:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:13.090 20:06:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.090 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.090 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.662 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.662 20:06:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:13.662 20:06:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.662 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.662 20:06:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.923 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.923 20:06:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:13.923 20:06:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.923 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.923 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.185 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.185 20:06:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:14.185 20:06:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.185 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.185 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.447 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.447 20:06:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:14.447 20:06:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.447 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.447 20:06:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.707 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.707 20:06:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:14.707 20:06:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.707 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.707 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.279 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.279 20:06:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:15.279 20:06:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.279 20:06:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.279 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.540 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.540 20:06:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:15.540 20:06:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.540 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.540 20:06:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.851 20:06:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:15.851 20:06:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.851 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.851 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.110 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.110 20:06:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:16.110 20:06:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.110 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.110 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.370 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.370 20:06:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:16.370 20:06:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.370 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.370 20:06:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.942 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.942 20:06:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:16.942 20:06:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.942 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.942 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.203 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.203 20:06:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:17.203 20:06:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.203 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.203 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.464 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.464 20:06:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:17.464 20:06:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.464 20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.464 
20:06:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.725 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.725 20:06:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:17.725 20:06:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.725 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.725 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.986 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.986 20:06:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:17.986 20:06:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.986 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.986 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.558 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.558 20:06:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:18.558 20:06:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.558 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.558 20:06:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.820 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.820 20:06:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:18.820 20:06:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.820 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.820 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.081 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.081 20:06:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:19.081 20:06:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.081 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.081 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.342 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.342 20:06:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:19.342 20:06:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.342 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.342 20:06:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.602 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.602 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:19.602 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.602 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.602 20:06:17 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.602 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867227 00:11:20.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (867227) - No such process 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 867227 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.173 rmmod nvme_tcp 00:11:20.173 rmmod nvme_fabrics 00:11:20.173 rmmod nvme_keyring 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 866903 ']' 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 866903 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 866903 ']' 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 866903 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 866903 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:20.173 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 866903' 00:11:20.174 killing process with pid 866903 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 866903 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 866903 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.174 20:06:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.721 20:06:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:22.721 00:11:22.721 real 0m20.984s 00:11:22.721 user 0m43.225s 00:11:22.721 sys 0m8.471s 00:11:22.721 20:06:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.721 20:06:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 ************************************ 00:11:22.721 END TEST nvmf_connect_stress 00:11:22.721 ************************************ 00:11:22.721 20:06:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:22.721 20:06:19 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:22.721 20:06:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:22.721 20:06:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.722 20:06:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.722 ************************************ 00:11:22.722 START TEST nvmf_fused_ordering 00:11:22.722 ************************************ 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:22.722 * Looking for test storage... 
00:11:22.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.722 20:06:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:29.313 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.313 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:29.314 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:29.314 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.314 20:06:26 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:29.314 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.314 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:11:29.575 00:11:29.575 --- 10.0.0.2 ping statistics --- 00:11:29.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.575 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:11:29.575 00:11:29.575 --- 10.0.0.1 ping statistics --- 00:11:29.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.575 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=873845 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 873845 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 873845 ']' 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.575 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.576 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.576 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.576 20:06:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.837 [2024-07-15 20:06:27.043061] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:11:29.837 [2024-07-15 20:06:27.043139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.837 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.837 [2024-07-15 20:06:27.130751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.837 [2024-07-15 20:06:27.222180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.837 [2024-07-15 20:06:27.222237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.837 [2024-07-15 20:06:27.222245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.837 [2024-07-15 20:06:27.222252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.837 [2024-07-15 20:06:27.222258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.837 [2024-07-15 20:06:27.222284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.409 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.409 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:30.409 20:06:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.409 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.409 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 [2024-07-15 20:06:27.879261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 [2024-07-15 20:06:27.903470] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.671 20:06:27 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 NULL1 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.671 20:06:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:30.671 [2024-07-15 20:06:27.973326] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:11:30.671 [2024-07-15 20:06:27.973373] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874068 ] 00:11:30.671 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.243 Attached to nqn.2016-06.io.spdk:cnode1 00:11:31.243 Namespace ID: 1 size: 1GB 00:11:31.243 fused_ordering(0) 00:11:31.243 fused_ordering(1) 00:11:31.243 fused_ordering(2) 00:11:31.243 fused_ordering(3) 00:11:31.243 fused_ordering(4) 00:11:31.243 fused_ordering(5) 00:11:31.243 fused_ordering(6) 00:11:31.243 fused_ordering(7) 00:11:31.243 fused_ordering(8) 00:11:31.243 fused_ordering(9) 00:11:31.243 fused_ordering(10) 00:11:31.243 fused_ordering(11) 00:11:31.243 fused_ordering(12) 00:11:31.243 fused_ordering(13) 00:11:31.243 fused_ordering(14) 00:11:31.243 fused_ordering(15) 00:11:31.243 fused_ordering(16) 00:11:31.243 fused_ordering(17) 00:11:31.243 fused_ordering(18) 00:11:31.243 fused_ordering(19) 00:11:31.243 fused_ordering(20) 00:11:31.243 fused_ordering(21) 00:11:31.243 fused_ordering(22) 00:11:31.243 fused_ordering(23) 00:11:31.243 fused_ordering(24) 00:11:31.243 fused_ordering(25) 00:11:31.243 fused_ordering(26) 00:11:31.243 fused_ordering(27) 00:11:31.243 fused_ordering(28) 00:11:31.243 fused_ordering(29) 00:11:31.243 fused_ordering(30) 00:11:31.243 fused_ordering(31) 00:11:31.243 fused_ordering(32) 00:11:31.243 fused_ordering(33) 00:11:31.243 fused_ordering(34) 00:11:31.243 fused_ordering(35) 00:11:31.243 fused_ordering(36) 00:11:31.243 fused_ordering(37) 00:11:31.243 fused_ordering(38) 00:11:31.243 fused_ordering(39) 00:11:31.243 fused_ordering(40) 00:11:31.243 fused_ordering(41) 00:11:31.243 fused_ordering(42) 00:11:31.243 fused_ordering(43) 00:11:31.243 
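rpc_cmd in these test scripts is a thin wrapper that forwards its arguments to scripts/rpc.py on the target's RPC socket, so the provisioning captured above can be reproduced by hand roughly as follows (a sketch, not the script itself; the socket path /var/tmp/spdk.sock, the cnode1 NQN, serial number, sizes and transport flags are the values visible in the log):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# Same transport flags as the rpc_cmd call above (-t tcp -o -u 8192)
rpc nvmf_create_transport -t tcp -o -u 8192

# Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listener on the namespaced target address
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MB null bdev with 512-byte blocks, exposed as a namespace of cnode1
# (this is what the test later reports as "Namespace ID: 1 size: 1GB")
rpc bdev_null_create NULL1 1000 512
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Drive the subsystem over NVMe/TCP with the fused_ordering test binary
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(n) lines that follow are printed by that test binary as it works through its queue of fused commands; the counter simply runs from 0 to 1023.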
fused_ordering(44) 00:11:31.243 fused_ordering(45) 00:11:31.243 [... fused_ordering(46) through fused_ordering(819), consecutive, one entry per counter, timestamps advancing from 00:11:31.243 to 00:11:32.951 ...] fused_ordering(820) 00:11:33.520 
[2024-07-15 20:06:30.952136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239c150 is same with the state(5) to be set 00:11:33.780 
fused_ordering(821) 00:11:33.780 [... fused_ordering(822) through fused_ordering(1006), consecutive, all at 00:11:33.780 ...] fused_ordering(1007) 00:11:33.780 
fused_ordering(1008) 00:11:33.780 fused_ordering(1009) 00:11:33.780 fused_ordering(1010) 00:11:33.780 fused_ordering(1011) 00:11:33.780 fused_ordering(1012) 00:11:33.780 fused_ordering(1013) 00:11:33.780 fused_ordering(1014) 00:11:33.780 fused_ordering(1015) 00:11:33.780 fused_ordering(1016) 00:11:33.780 fused_ordering(1017) 00:11:33.780 fused_ordering(1018) 00:11:33.780 fused_ordering(1019) 00:11:33.780 fused_ordering(1020) 00:11:33.780 fused_ordering(1021) 00:11:33.780 fused_ordering(1022) 00:11:33.780 fused_ordering(1023) 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.780 20:06:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.780 rmmod nvme_tcp 00:11:33.780 rmmod nvme_fabrics 00:11:33.780 rmmod nvme_keyring 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 873845 ']' 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 873845 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 873845 ']' 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 873845 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.780 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873845 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873845' 00:11:33.781 killing process with pid 873845 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 873845 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 873845 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.781 20:06:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.371 20:06:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.371 00:11:36.371 real 0m13.527s 00:11:36.371 user 0m7.474s 00:11:36.371 sys 0m7.353s 00:11:36.371 20:06:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.371 20:06:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:36.371 ************************************ 00:11:36.371 END TEST nvmf_fused_ordering 00:11:36.371 ************************************ 00:11:36.371 20:06:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:36.371 20:06:33 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:36.371 20:06:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.371 20:06:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.371 20:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.371 ************************************ 00:11:36.371 START TEST nvmf_delete_subsystem 00:11:36.371 ************************************ 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:36.371 * Looking for test storage... 00:11:36.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.371 
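The tail of the fused_ordering test above is the standard cleanup path: nvmftestfini unloads the host-side NVMe modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the target process, and nvmf_tcp_fini removes the spdk-created namespace and flushes the initiator-side address. A condensed sketch of that sequence, using the pid 873845 and the namespace and interface names from this run (deleting the namespace also hands cvl_0_0 back to the default namespace):

NVMF_PID=873845          # pid reported as nvmfpid earlier in the log
NS=cvl_0_0_ns_spdk
INITIATOR_IF=cvl_0_1

sync
modprobe -v -r nvme-tcp || true       # also pulls out nvme_fabrics and nvme_keyring, as the rmmod output shows
modprobe -v -r nvme-fabrics || true

kill "$NVMF_PID"
wait "$NVMF_PID" 2>/dev/null || true  # wait only works if nvmf_tgt is a child of this shell; otherwise poll with kill -0

# remove_spdk_ns boils down to deleting the test namespace; the initiator
# address is flushed afterwards, matching the ip -4 addr flush cvl_0_1 below
ip netns delete "$NS"
ip -4 addr flush "$INITIATOR_IF"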
20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.371 20:06:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.371 20:06:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.963 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:11:43.225 00:11:43.225 --- 10.0.0.2 ping statistics --- 00:11:43.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.225 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:11:43.225 00:11:43.225 --- 10.0.0.1 ping statistics --- 00:11:43.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.225 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=878740 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 878740 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 878740 ']' 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.225 20:06:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.485 [2024-07-15 20:06:40.703037] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:11:43.485 [2024-07-15 20:06:40.703104] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.485 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.485 [2024-07-15 20:06:40.775451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.485 [2024-07-15 20:06:40.850198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:43.485 [2024-07-15 20:06:40.850238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.485 [2024-07-15 20:06:40.850246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.485 [2024-07-15 20:06:40.850252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.485 [2024-07-15 20:06:40.850258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.486 [2024-07-15 20:06:40.850412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.486 [2024-07-15 20:06:40.850414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.057 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.057 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:44.057 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.057 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.057 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 [2024-07-15 20:06:41.517896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 [2024-07-15 20:06:41.542035] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 NULL1 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 Delay0 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=879082 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:44.318 20:06:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:44.318 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.318 [2024-07-15 20:06:41.638709] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
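Up to this point the trace has built the whole target side for the delete_subsystem test over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420 (the address assigned to the interface inside the cvl_0_0_ns_spdk namespace), a 1000 MB null bdev with 512-byte blocks, and a delay bdev in front of it that adds roughly one second of latency per I/O, exposed as a namespace. Below is a minimal sketch of the same sequence driven through scripts/rpc.py instead of the harness's rpc_cmd wrapper; the SPDK checkout path is an assumption, and rpc.py's default socket is the /var/tmp/spdk.sock the log shows nvmf_tgt listening on.

#!/usr/bin/env bash
# Sketch of the target construction traced above, issued through scripts/rpc.py
# rather than the harness's rpc_cmd wrapper. ROOTDIR is an assumption; rpc.py
# talks to /var/tmp/spdk.sock by default, the socket nvmf_tgt uses in this log.
set -euo pipefail

ROOTDIR=/path/to/spdk                      # assumption: your SPDK checkout
rpc() { "$ROOTDIR/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192                    # transport options exactly as traced
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                             # allow any host, at most 10 namespaces
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                 # the address assigned inside the netns
rpc bdev_null_create NULL1 1000 512                            # 1000 MB null bdev, 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                # ~1 s of added latency per I/O
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # expose Delay0 on cnode1

The delay bdev is what makes the next step interesting: with about a second per I/O and a queue depth of 128, plenty of commands are still in flight when the subsystem is deleted.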
00:11:46.230 20:06:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.230 20:06:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.230 20:06:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 starting I/O failed: -6 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 [2024-07-15 20:06:43.764665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e000 is same with the state(5) to be set 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 
00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Write completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.490 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 [2024-07-15 20:06:43.765408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e5c0 is same with the state(5) to be set 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write 
completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 
00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 Read completed with error (sct=0, sc=8) 00:11:46.491 starting I/O failed: -6 00:11:46.491 Write completed with error (sct=0, sc=8) 00:11:46.491 [2024-07-15 20:06:43.768311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3258000c00 is same with the state(5) to be set 00:11:47.433 [2024-07-15 20:06:44.738638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5fac0 is same with the state(5) to be set 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, 
sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 [2024-07-15 20:06:44.768052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e3e0 is same with the state(5) to be set 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Write completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.433 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 [2024-07-15 20:06:44.768556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e7a0 is same with the state(5) to be set 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 
00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 [2024-07-15 20:06:44.770043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f325800cfe0 is same with the state(5) to be set 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Write completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 Read completed with error (sct=0, sc=8) 00:11:47.434 [2024-07-15 20:06:44.770543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f325800d740 is same with the state(5) to be set 00:11:47.434 Initializing NVMe Controllers 00:11:47.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.434 Controller IO queue size 128, less than required. 00:11:47.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:47.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:47.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:47.434 Initialization complete. Launching workers. 
00:11:47.434 ======================================================== 00:11:47.434 Latency(us) 00:11:47.434 Device Information : IOPS MiB/s Average min max 00:11:47.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.75 0.08 907161.07 760.93 1007913.89 00:11:47.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.18 0.09 945673.18 276.30 2001436.89 00:11:47.434 ======================================================== 00:11:47.434 Total : 343.92 0.17 927225.26 276.30 2001436.89 00:11:47.434 00:11:47.434 [2024-07-15 20:06:44.771039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5fac0 (9): Bad file descriptor 00:11:47.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:47.434 20:06:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.434 20:06:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:47.434 20:06:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 879082 00:11:47.434 20:06:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 879082 00:11:48.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (879082) - No such process 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 879082 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 879082 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 879082 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
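The failed run above is the point of the test: two seconds after spdk_nvme_perf starts against cnode1, the harness deletes the subsystem out from under it, so in-flight commands complete with the (sct=0, sc=8) errors and "starting I/O failed: -6" aborts shown above, the initiator's qpairs are torn down, and perf exits with errors. All the script then has to verify is that the perf process disappears within the polling budget and that its exit status is non-zero, which it does here before rebuilding the subsystem and listener. Continuing the earlier sketch (same ROOTDIR and rpc helper; the 2 s head start and the 30 * 0.5 s polling budget are taken from the traced script), the pattern looks roughly like this:

# Delete-while-I/O-is-in-flight check, continuing the sketch above.
"$ROOTDIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2                                        # let I/O pile up behind Delay0
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

delay=0                                        # poll up to ~15 s for perf to go away
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && { echo "perf never exited" >&2; exit 1; }
    sleep 0.5
done

if wait "$perf_pid"; then                      # it must have failed: its namespace vanished mid-run
    echo "expected spdk_nvme_perf to exit non-zero" >&2
    exit 1
fi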
00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.006 [2024-07-15 20:06:45.303813] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=879763 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:48.006 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.006 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.006 [2024-07-15 20:06:45.369356] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
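This second run is the control case: the subsystem is left alone and perf is simply expected to finish and exit zero. For reference, the command line behind it, reproduced from the trace; the flag notes reflect ordinary spdk_nvme_perf usage rather than anything the log states, and -P 4 is copied through without interpretation.

# The control run, reproduced from the trace. Flag notes reflect common
# spdk_nvme_perf usage, not something stated in the log; -P 4 is copied verbatim.
#   -c 0xC            core mask (cores 2 and 3, matching the lcore 2/3 rows below)
#   -r '...'          transport ID of the listener created earlier
#   -t 3              run time in seconds
#   -q 128            queue depth
#   -w randrw -M 70   mixed workload, 70% reads
#   -o 512            I/O size in bytes
"$ROOTDIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4

The harness backgrounds this command and polls it with kill -0 every 0.5 s, as the lines below show; with Delay0 holding each I/O for about a second, the latency table that follows reports averages just over 1,000,000 us and 128 IOPS per core, which is simply the queue depth of 128 divided by the ~1 s latency.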
00:11:48.578 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.578 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:48.578 20:06:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.150 20:06:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.150 20:06:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:49.150 20:06:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.411 20:06:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.411 20:06:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:49.411 20:06:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.983 20:06:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.983 20:06:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:49.983 20:06:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:50.555 20:06:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.555 20:06:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:50.555 20:06:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:51.126 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:51.126 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:51.126 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:51.387 Initializing NVMe Controllers 00:11:51.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:51.387 Controller IO queue size 128, less than required. 00:11:51.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:51.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:51.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:51.387 Initialization complete. Launching workers. 
00:11:51.387 ======================================================== 00:11:51.387 Latency(us) 00:11:51.387 Device Information : IOPS MiB/s Average min max 00:11:51.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002203.47 1000289.40 1006028.72 00:11:51.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003414.56 1000252.29 1040883.07 00:11:51.387 ======================================================== 00:11:51.387 Total : 256.00 0.12 1002809.01 1000252.29 1040883.07 00:11:51.387 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 879763 00:11:51.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (879763) - No such process 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 879763 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.648 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.649 rmmod nvme_tcp 00:11:51.649 rmmod nvme_fabrics 00:11:51.649 rmmod nvme_keyring 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 878740 ']' 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 878740 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 878740 ']' 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 878740 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 878740 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 878740' 00:11:51.649 killing process with pid 878740 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 878740 00:11:51.649 20:06:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 878740 
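With both runs verified, the trace here begins the standard teardown: the EXIT trap is cleared, nvme-tcp and the modules it pulled in are unloaded, and the nvmf_tgt started at the top of the test (pid 878740) is killed only after checking that the pid still names an SPDK reactor; the namespace and address cleanup follows immediately below. A rough equivalent is sketched here, where the explicit ip netns delete is an assumption about what the xtrace-silenced _remove_spdk_ns helper does.

# Teardown mirroring the nvmftestfini/killprocess sequence traced above.
sync
modprobe -v -r nvme-tcp            # the log shows this also rmmod's nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics

nvmfpid=878740                     # the pid recorded when nvmf_tgt was started
# killprocess inspects the process name before killing; simplified check here.
if [[ "$(ps --no-headers -o comm= "$nvmfpid")" == reactor_0 ]]; then
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # the harness uses `wait` here
fi

ip netns delete cvl_0_0_ns_spdk    # assumption: roughly what _remove_spdk_ns does
ip -4 addr flush cvl_0_1           # matches the final flush in the trace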
00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.910 20:06:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.823 20:06:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:53.823 00:11:53.823 real 0m17.845s 00:11:53.823 user 0m30.743s 00:11:53.823 sys 0m6.191s 00:11:53.823 20:06:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.824 20:06:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.824 ************************************ 00:11:53.824 END TEST nvmf_delete_subsystem 00:11:53.824 ************************************ 00:11:53.824 20:06:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:53.824 20:06:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:53.824 20:06:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:53.824 20:06:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.824 20:06:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:54.085 ************************************ 00:11:54.085 START TEST nvmf_ns_masking 00:11:54.085 ************************************ 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:54.085 * Looking for test storage... 
00:11:54.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f7a42490-231b-417c-9204-1bbf3ed87e5b 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2f0ef640-07c8-4be7-9c7f-427bb921bc98 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=098646d0-11ab-4fc2-9594-00b685a6cb55 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.085 20:06:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:00.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.728 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:00.729 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.729 
20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:00.729 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:00.729 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.729 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.990 20:06:58 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.990 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.990 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.990 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.990 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.990 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:01.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:12:01.250 00:12:01.250 --- 10.0.0.2 ping statistics --- 00:12:01.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.250 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:12:01.250 00:12:01.250 --- 10.0.0.1 ping statistics --- 00:12:01.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.250 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:01.250 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=884754 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 884754 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 884754 ']' 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.251 20:06:58 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.251 20:06:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.251 [2024-07-15 20:06:58.573566] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:12:01.251 [2024-07-15 20:06:58.573643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.251 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.251 [2024-07-15 20:06:58.639409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.511 [2024-07-15 20:06:58.703547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.511 [2024-07-15 20:06:58.703580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.511 [2024-07-15 20:06:58.703587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.511 [2024-07-15 20:06:58.703594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.511 [2024-07-15 20:06:58.703599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.511 [2024-07-15 20:06:58.703616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:02.111 [2024-07-15 20:06:59.486357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:02.111 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:02.371 Malloc1 00:12:02.371 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:02.631 Malloc2 00:12:02.632 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
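Condensed from the trace above, the target-side bring-up amounts to a handful of RPC calls. A minimal sketch follows, assuming a running nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and with the full rpc.py path from this run abbreviated to rpc.py:
  rpc.py nvmf_create_transport -t tcp -o -u 8192                                        # TCP transport with the options passed by the test
  rpc.py bdev_malloc_create 64 512 -b Malloc1                                           # 64 MiB backing bdevs, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME    # -a: allow any host, -s: serial number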
00:12:02.632 20:06:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:02.892 20:07:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.892 [2024-07-15 20:07:00.278214] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.892 20:07:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:02.892 20:07:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 098646d0-11ab-4fc2-9594-00b685a6cb55 -a 10.0.0.2 -s 4420 -i 4 00:12:03.152 20:07:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.152 20:07:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.152 20:07:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.152 20:07:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:03.152 20:07:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.065 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.065 [ 0]:0x1 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=505ab0b5ca86482e920c47ff23a9f381 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 505ab0b5ca86482e920c47ff23a9f381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
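The host-side check that follows is just as small. A minimal sketch of the connect-and-inspect steps, assuming the subsystem NQN, host NQN, host UUID (-I) and target address shown in the trace:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 098646d0-11ab-4fc2-9594-00b685a6cb55 -a 10.0.0.2 -s 4420 -i 4
  nvme list-ns /dev/nvme0                              # active namespace IDs as seen by this host
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid  # the test treats an all-zero NGUID as "not visible"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1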
00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.325 [ 0]:0x1 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.325 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.585 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=505ab0b5ca86482e920c47ff23a9f381 00:12:05.585 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 505ab0b5ca86482e920c47ff23a9f381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.585 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:05.585 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.585 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.585 [ 1]:0x2 00:12:05.585 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.586 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.586 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:05.586 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.586 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:05.586 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.586 20:07:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.846 20:07:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:05.846 20:07:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:05.846 20:07:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 098646d0-11ab-4fc2-9594-00b685a6cb55 -a 10.0.0.2 -s 4420 -i 4 00:12:06.105 20:07:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:06.105 20:07:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.105 20:07:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.105 20:07:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:06.105 20:07:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:06.105 20:07:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:08.014 20:07:05 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:08.014 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:08.274 [ 0]:0x2 00:12:08.274 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:08.275 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.275 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:08.275 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.275 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:08.534 [ 0]:0x1 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=505ab0b5ca86482e920c47ff23a9f381 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 505ab0b5ca86482e920c47ff23a9f381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:08.534 [ 1]:0x2 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.534 20:07:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:08.795 [ 0]:0x2 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:08.795 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.056 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:09.056 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:09.056 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 098646d0-11ab-4fc2-9594-00b685a6cb55 -a 10.0.0.2 -s 4420 -i 4 00:12:09.316 20:07:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:09.316 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:09.316 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.316 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:09.316 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:09.316 20:07:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
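Behind those visibility flips sit three RPCs. A minimal sketch of the masking toggle itself, reconstructed from the calls in this trace (same subsystem, namespace ID and host NQN as the run above):
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # namespace starts hidden from all hosts
  rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # host1 now sees NSID 1
  rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # host1 no longer sees NSID 1
  # The test later checks that nvmf_ns_remove_host against the auto-visible NSID 2 is rejected (-32602 Invalid parameters).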
00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.253 [ 0]:0x1 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=505ab0b5ca86482e920c47ff23a9f381 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 505ab0b5ca86482e920c47ff23a9f381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.253 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.514 [ 1]:0x2 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.514 20:07:08 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.797 [ 0]:0x2 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.797 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.798 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.798 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.798 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.798 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.798 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:11.798 20:07:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.798 [2024-07-15 20:07:09.139649] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:11.798 request: 00:12:11.798 { 00:12:11.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.798 "nsid": 2, 00:12:11.798 "host": "nqn.2016-06.io.spdk:host1", 00:12:11.798 "method": "nvmf_ns_remove_host", 00:12:11.798 "req_id": 1 00:12:11.798 } 00:12:11.798 Got JSON-RPC error response 00:12:11.798 response: 00:12:11.798 { 00:12:11.798 "code": -32602, 00:12:11.798 "message": "Invalid parameters" 00:12:11.798 } 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.798 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.058 [ 0]:0x2 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b25b0cb7b4cc4f6dafc020243e92e7c2 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b25b0cb7b4cc4f6dafc020243e92e7c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=886936 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 886936 /var/tmp/host.sock 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 886936 ']' 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:12.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.058 20:07:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:12.319 [2024-07-15 20:07:09.538696] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:12:12.319 [2024-07-15 20:07:09.538750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886936 ] 00:12:12.319 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.319 [2024-07-15 20:07:09.616255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.319 [2024-07-15 20:07:09.680903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.891 20:07:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.891 20:07:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:12.891 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.152 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:13.413 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f7a42490-231b-417c-9204-1bbf3ed87e5b 00:12:13.413 20:07:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:13.413 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F7A42490231B417C92041BBF3ED87E5B -i 00:12:13.413 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2f0ef640-07c8-4be7-9c7f-427bb921bc98 00:12:13.413 20:07:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:13.413 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2F0EF64007C84BE79C7F427BB921BC98 -i 00:12:13.674 20:07:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.933 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:13.933 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:13.933 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:14.193 nvme0n1 00:12:14.193 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:14.193 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:14.474 nvme1n2 00:12:14.474 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:14.474 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:14.474 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:14.474 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:14.474 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:14.735 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:14.735 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:14.735 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:14.735 20:07:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:14.735 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f7a42490-231b-417c-9204-1bbf3ed87e5b == \f\7\a\4\2\4\9\0\-\2\3\1\b\-\4\1\7\c\-\9\2\0\4\-\1\b\b\f\3\e\d\8\7\e\5\b ]] 00:12:14.735 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:14.735 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:14.735 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2f0ef640-07c8-4be7-9c7f-427bb921bc98 == \2\f\0\e\f\6\4\0\-\0\7\c\8\-\4\b\e\7\-\9\c\7\f\-\4\2\7\b\b\9\2\1\b\c\9\8 ]] 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 886936 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 886936 ']' 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 886936 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 886936 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 886936' 00:12:14.994 killing process with pid 886936 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 886936 00:12:14.994 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 886936 00:12:15.253 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:15.550 20:07:12 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.550 rmmod nvme_tcp 00:12:15.550 rmmod nvme_fabrics 00:12:15.550 rmmod nvme_keyring 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 884754 ']' 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 884754 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 884754 ']' 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 884754 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884754 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884754' 00:12:15.550 killing process with pid 884754 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 884754 00:12:15.550 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 884754 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.809 20:07:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.723 20:07:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:17.723 00:12:17.723 real 0m23.800s 00:12:17.723 user 0m23.675s 00:12:17.723 sys 0m7.269s 00:12:17.723 20:07:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.723 20:07:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 ************************************ 00:12:17.723 END TEST nvmf_ns_masking 00:12:17.723 ************************************ 00:12:17.723 20:07:15 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:17.723 20:07:15 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:17.723 20:07:15 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:17.723 20:07:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:17.723 20:07:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.723 20:07:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 ************************************ 00:12:17.723 START TEST nvmf_nvme_cli 00:12:17.723 ************************************ 00:12:17.723 20:07:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:17.985 * Looking for test storage... 00:12:17.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.985 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.986 20:07:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.986 20:07:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.986 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.986 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.986 20:07:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.986 20:07:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:26.154 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:26.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:26.154 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.154 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:26.155 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.155 20:07:22 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:12:26.155 00:12:26.155 --- 10.0.0.2 ping statistics --- 00:12:26.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.155 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:12:26.155 00:12:26.155 --- 10.0.0.1 ping statistics --- 00:12:26.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.155 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=891834 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 891834 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 891834 ']' 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.155 20:07:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 [2024-07-15 20:07:22.553641] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
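[editor's sketch] In plainer terms, the nvmf_tcp_init trace above performs the following host-side network setup before the target application is started. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this runner; the commands are taken from the xtrace lines above, condensed for readability:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                                  # connectivity checks, as logged below
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
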
00:12:26.155 [2024-07-15 20:07:22.553707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.155 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.155 [2024-07-15 20:07:22.624913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.155 [2024-07-15 20:07:22.701803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.155 [2024-07-15 20:07:22.701843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.155 [2024-07-15 20:07:22.701851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.155 [2024-07-15 20:07:22.701857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.155 [2024-07-15 20:07:22.701863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.155 [2024-07-15 20:07:22.702005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.155 [2024-07-15 20:07:22.702136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.155 [2024-07-15 20:07:22.702530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.155 [2024-07-15 20:07:22.702531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 [2024-07-15 20:07:23.376715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 Malloc0 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 Malloc1 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 [2024-07-15 20:07:23.466500] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.155 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:26.415 00:12:26.415 Discovery Log Number of Records 2, Generation counter 2 00:12:26.415 =====Discovery Log Entry 0====== 00:12:26.415 trtype: tcp 00:12:26.415 adrfam: ipv4 00:12:26.415 subtype: current discovery subsystem 00:12:26.415 treq: not required 00:12:26.415 portid: 0 00:12:26.415 trsvcid: 4420 00:12:26.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:26.415 traddr: 10.0.0.2 00:12:26.415 eflags: explicit discovery connections, duplicate discovery information 00:12:26.415 sectype: none 00:12:26.415 =====Discovery Log Entry 1====== 00:12:26.415 trtype: tcp 00:12:26.415 adrfam: ipv4 00:12:26.415 subtype: nvme subsystem 00:12:26.415 treq: not required 00:12:26.415 portid: 0 00:12:26.415 trsvcid: 4420 00:12:26.415 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:26.415 traddr: 10.0.0.2 00:12:26.415 eflags: none 00:12:26.415 sectype: none 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:26.415 20:07:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.798 20:07:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:27.798 20:07:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.798 20:07:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.798 20:07:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:27.798 20:07:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:27.798 20:07:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:30.337 20:07:27 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:30.337 /dev/nvme0n1 ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.337 rmmod nvme_tcp 00:12:30.337 rmmod nvme_fabrics 00:12:30.337 rmmod nvme_keyring 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 891834 ']' 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 891834 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 891834 ']' 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 891834 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891834 00:12:30.337 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891834' 00:12:30.338 killing process with pid 891834 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 891834 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 891834 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.338 20:07:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.883 20:07:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.883 00:12:32.883 real 0m14.645s 00:12:32.883 user 0m22.050s 00:12:32.883 sys 0m5.994s 00:12:32.883 20:07:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.883 20:07:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.883 ************************************ 00:12:32.883 END TEST nvmf_nvme_cli 00:12:32.883 ************************************ 00:12:32.883 20:07:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:32.883 20:07:29 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:32.883 20:07:29 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:32.883 20:07:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.883 20:07:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.883 20:07:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.883 ************************************ 00:12:32.883 START TEST nvmf_vfio_user 00:12:32.883 ************************************ 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:32.883 * Looking for test storage... 00:12:32.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:32.883 
20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:32.883 20:07:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=893446 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 893446' 00:12:32.883 Process pid: 893446 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 893446 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 893446 ']' 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.883 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:32.883 [2024-07-15 20:07:30.057614] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:12:32.883 [2024-07-15 20:07:30.057669] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.883 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.883 [2024-07-15 20:07:30.121126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.883 [2024-07-15 20:07:30.186014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.883 [2024-07-15 20:07:30.186049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.883 [2024-07-15 20:07:30.186057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.883 [2024-07-15 20:07:30.186063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.883 [2024-07-15 20:07:30.186068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
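[editor's sketch] For orientation, the prologue above boils down to starting a dedicated nvmf_tgt instance pinned to cores 0-3 and blocking until its RPC socket answers before any vfio-user resources are created. A minimal sketch under those assumptions; the launch command is the one traced above, while the polling loop only illustrates what the waitforlisten helper in autotest_common.sh does (rpc_get_methods is used here purely as a liveness check):

    # start the target used by the vfio-user tests (command as traced above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # wait until the app is listening on /var/tmp/spdk.sock (illustrative check)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
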
00:12:32.883 [2024-07-15 20:07:30.186160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.883 [2024-07-15 20:07:30.186246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.883 [2024-07-15 20:07:30.186376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.883 [2024-07-15 20:07:30.186380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.454 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.454 20:07:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:33.454 20:07:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:34.395 20:07:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:34.656 20:07:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:34.656 20:07:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:34.656 20:07:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:34.656 20:07:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:34.656 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:34.916 Malloc1 00:12:34.916 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:34.916 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:35.176 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:35.437 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.437 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:35.437 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:35.437 Malloc2 00:12:35.437 20:07:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:35.697 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:35.962 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:35.962 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:35.962 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:35.962 20:07:33 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.962 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:35.962 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:35.962 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:36.223 [2024-07-15 20:07:33.397107] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:12:36.223 [2024-07-15 20:07:33.397154] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894139 ] 00:12:36.223 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.223 [2024-07-15 20:07:33.428839] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:36.223 [2024-07-15 20:07:33.437471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:36.223 [2024-07-15 20:07:33.437491] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2be4bb9000 00:12:36.223 [2024-07-15 20:07:33.438470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.439469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.440473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.441486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.442490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.443492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.444499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.445504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:36.223 [2024-07-15 20:07:33.446514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:36.223 [2024-07-15 20:07:33.446523] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2be4bae000 00:12:36.223 [2024-07-15 20:07:33.451850] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:36.223 [2024-07-15 20:07:33.468787] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:36.223 [2024-07-15 20:07:33.468811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:36.223 [2024-07-15 20:07:33.471644] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:36.223 [2024-07-15 20:07:33.471689] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:36.223 [2024-07-15 20:07:33.471771] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:36.223 [2024-07-15 20:07:33.471787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:36.223 [2024-07-15 20:07:33.471793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:36.223 [2024-07-15 20:07:33.472644] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:36.223 [2024-07-15 20:07:33.472653] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:36.223 [2024-07-15 20:07:33.472660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:36.223 [2024-07-15 20:07:33.473643] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:36.223 [2024-07-15 20:07:33.473652] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:36.223 [2024-07-15 20:07:33.473660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:36.223 [2024-07-15 20:07:33.474649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:36.223 [2024-07-15 20:07:33.474657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:36.223 [2024-07-15 20:07:33.475651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:36.223 [2024-07-15 20:07:33.475659] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:36.223 [2024-07-15 20:07:33.475664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:36.223 [2024-07-15 20:07:33.475671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:36.223 [2024-07-15 20:07:33.475776] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:36.223 [2024-07-15 20:07:33.475781] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:36.223 [2024-07-15 20:07:33.475786] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:36.223 [2024-07-15 20:07:33.476660] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:36.223 [2024-07-15 20:07:33.477664] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:36.223 [2024-07-15 20:07:33.478676] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:36.223 [2024-07-15 20:07:33.479677] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.223 [2024-07-15 20:07:33.479730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:36.223 [2024-07-15 20:07:33.480689] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:36.223 [2024-07-15 20:07:33.480696] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:36.223 [2024-07-15 20:07:33.480701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:36.223 [2024-07-15 20:07:33.480722] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:36.223 [2024-07-15 20:07:33.480735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:36.223 [2024-07-15 20:07:33.480749] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:36.223 [2024-07-15 20:07:33.480754] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.223 [2024-07-15 20:07:33.480766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.223 [2024-07-15 20:07:33.480798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:36.223 [2024-07-15 20:07:33.480806] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:36.223 [2024-07-15 20:07:33.480813] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:36.223 [2024-07-15 20:07:33.480818] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:36.223 [2024-07-15 20:07:33.480822] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:36.223 [2024-07-15 20:07:33.480827] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:36.223 [2024-07-15 20:07:33.480831] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:36.223 [2024-07-15 20:07:33.480836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.480875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.224 [2024-07-15 20:07:33.480883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.224 [2024-07-15 20:07:33.480892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.224 [2024-07-15 20:07:33.480900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.224 [2024-07-15 20:07:33.480905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.480931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.480937] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:36.224 [2024-07-15 20:07:33.480942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.480966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.480980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481039] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481054] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:36.224 [2024-07-15 20:07:33.481058] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:36.224 [2024-07-15 20:07:33.481064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481087] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:36.224 [2024-07-15 20:07:33.481094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481109] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:36.224 [2024-07-15 20:07:33.481113] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.224 [2024-07-15 20:07:33.481119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481163] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:36.224 [2024-07-15 20:07:33.481167] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.224 [2024-07-15 20:07:33.481173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:36.224 [2024-07-15 20:07:33.481208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481230] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:36.224 [2024-07-15 20:07:33.481235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:36.224 [2024-07-15 20:07:33.481239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:36.224 [2024-07-15 20:07:33.481258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481336] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:36.224 [2024-07-15 20:07:33.481341] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:36.224 [2024-07-15 20:07:33.481344] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:36.224 [2024-07-15 20:07:33.481348] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:36.224 [2024-07-15 20:07:33.481354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:36.224 [2024-07-15 20:07:33.481362] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:36.224 
[2024-07-15 20:07:33.481366] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:36.224 [2024-07-15 20:07:33.481371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481378] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:36.224 [2024-07-15 20:07:33.481383] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.224 [2024-07-15 20:07:33.481388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:36.224 [2024-07-15 20:07:33.481401] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:36.224 [2024-07-15 20:07:33.481409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:36.224 [2024-07-15 20:07:33.481416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:36.224 [2024-07-15 20:07:33.481444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:36.224 ===================================================== 00:12:36.224 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.224 ===================================================== 00:12:36.224 Controller Capabilities/Features 00:12:36.224 ================================ 00:12:36.224 Vendor ID: 4e58 00:12:36.224 Subsystem Vendor ID: 4e58 00:12:36.224 Serial Number: SPDK1 00:12:36.224 Model Number: SPDK bdev Controller 00:12:36.224 Firmware Version: 24.09 00:12:36.224 Recommended Arb Burst: 6 00:12:36.224 IEEE OUI Identifier: 8d 6b 50 00:12:36.224 Multi-path I/O 00:12:36.224 May have multiple subsystem ports: Yes 00:12:36.224 May have multiple controllers: Yes 00:12:36.224 Associated with SR-IOV VF: No 00:12:36.224 Max Data Transfer Size: 131072 00:12:36.224 Max Number of Namespaces: 32 00:12:36.224 Max Number of I/O Queues: 127 00:12:36.224 NVMe Specification Version (VS): 1.3 00:12:36.224 NVMe Specification Version (Identify): 1.3 00:12:36.224 Maximum Queue Entries: 256 00:12:36.224 Contiguous Queues Required: Yes 00:12:36.224 Arbitration Mechanisms Supported 00:12:36.224 Weighted Round Robin: Not Supported 00:12:36.224 Vendor Specific: Not Supported 00:12:36.224 Reset Timeout: 15000 ms 00:12:36.224 Doorbell Stride: 4 bytes 00:12:36.224 NVM Subsystem Reset: Not Supported 00:12:36.224 Command Sets Supported 00:12:36.224 NVM Command Set: Supported 00:12:36.224 Boot Partition: Not Supported 00:12:36.224 Memory Page Size Minimum: 4096 bytes 00:12:36.224 Memory Page Size Maximum: 4096 bytes 00:12:36.224 Persistent Memory Region: Not Supported 
00:12:36.224 Optional Asynchronous Events Supported 00:12:36.224 Namespace Attribute Notices: Supported 00:12:36.224 Firmware Activation Notices: Not Supported 00:12:36.224 ANA Change Notices: Not Supported 00:12:36.224 PLE Aggregate Log Change Notices: Not Supported 00:12:36.224 LBA Status Info Alert Notices: Not Supported 00:12:36.224 EGE Aggregate Log Change Notices: Not Supported 00:12:36.224 Normal NVM Subsystem Shutdown event: Not Supported 00:12:36.224 Zone Descriptor Change Notices: Not Supported 00:12:36.224 Discovery Log Change Notices: Not Supported 00:12:36.224 Controller Attributes 00:12:36.224 128-bit Host Identifier: Supported 00:12:36.224 Non-Operational Permissive Mode: Not Supported 00:12:36.224 NVM Sets: Not Supported 00:12:36.224 Read Recovery Levels: Not Supported 00:12:36.224 Endurance Groups: Not Supported 00:12:36.224 Predictable Latency Mode: Not Supported 00:12:36.224 Traffic Based Keep ALive: Not Supported 00:12:36.224 Namespace Granularity: Not Supported 00:12:36.224 SQ Associations: Not Supported 00:12:36.224 UUID List: Not Supported 00:12:36.224 Multi-Domain Subsystem: Not Supported 00:12:36.224 Fixed Capacity Management: Not Supported 00:12:36.224 Variable Capacity Management: Not Supported 00:12:36.224 Delete Endurance Group: Not Supported 00:12:36.224 Delete NVM Set: Not Supported 00:12:36.224 Extended LBA Formats Supported: Not Supported 00:12:36.224 Flexible Data Placement Supported: Not Supported 00:12:36.224 00:12:36.224 Controller Memory Buffer Support 00:12:36.224 ================================ 00:12:36.224 Supported: No 00:12:36.224 00:12:36.224 Persistent Memory Region Support 00:12:36.224 ================================ 00:12:36.224 Supported: No 00:12:36.224 00:12:36.224 Admin Command Set Attributes 00:12:36.224 ============================ 00:12:36.224 Security Send/Receive: Not Supported 00:12:36.224 Format NVM: Not Supported 00:12:36.224 Firmware Activate/Download: Not Supported 00:12:36.224 Namespace Management: Not Supported 00:12:36.224 Device Self-Test: Not Supported 00:12:36.224 Directives: Not Supported 00:12:36.224 NVMe-MI: Not Supported 00:12:36.224 Virtualization Management: Not Supported 00:12:36.224 Doorbell Buffer Config: Not Supported 00:12:36.224 Get LBA Status Capability: Not Supported 00:12:36.224 Command & Feature Lockdown Capability: Not Supported 00:12:36.224 Abort Command Limit: 4 00:12:36.224 Async Event Request Limit: 4 00:12:36.224 Number of Firmware Slots: N/A 00:12:36.224 Firmware Slot 1 Read-Only: N/A 00:12:36.224 Firmware Activation Without Reset: N/A 00:12:36.224 Multiple Update Detection Support: N/A 00:12:36.224 Firmware Update Granularity: No Information Provided 00:12:36.224 Per-Namespace SMART Log: No 00:12:36.224 Asymmetric Namespace Access Log Page: Not Supported 00:12:36.224 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:36.225 Command Effects Log Page: Supported 00:12:36.225 Get Log Page Extended Data: Supported 00:12:36.225 Telemetry Log Pages: Not Supported 00:12:36.225 Persistent Event Log Pages: Not Supported 00:12:36.225 Supported Log Pages Log Page: May Support 00:12:36.225 Commands Supported & Effects Log Page: Not Supported 00:12:36.225 Feature Identifiers & Effects Log Page:May Support 00:12:36.225 NVMe-MI Commands & Effects Log Page: May Support 00:12:36.225 Data Area 4 for Telemetry Log: Not Supported 00:12:36.225 Error Log Page Entries Supported: 128 00:12:36.225 Keep Alive: Supported 00:12:36.225 Keep Alive Granularity: 10000 ms 00:12:36.225 00:12:36.225 NVM Command Set Attributes 
00:12:36.225 ========================== 00:12:36.225 Submission Queue Entry Size 00:12:36.225 Max: 64 00:12:36.225 Min: 64 00:12:36.225 Completion Queue Entry Size 00:12:36.225 Max: 16 00:12:36.225 Min: 16 00:12:36.225 Number of Namespaces: 32 00:12:36.225 Compare Command: Supported 00:12:36.225 Write Uncorrectable Command: Not Supported 00:12:36.225 Dataset Management Command: Supported 00:12:36.225 Write Zeroes Command: Supported 00:12:36.225 Set Features Save Field: Not Supported 00:12:36.225 Reservations: Not Supported 00:12:36.225 Timestamp: Not Supported 00:12:36.225 Copy: Supported 00:12:36.225 Volatile Write Cache: Present 00:12:36.225 Atomic Write Unit (Normal): 1 00:12:36.225 Atomic Write Unit (PFail): 1 00:12:36.225 Atomic Compare & Write Unit: 1 00:12:36.225 Fused Compare & Write: Supported 00:12:36.225 Scatter-Gather List 00:12:36.225 SGL Command Set: Supported (Dword aligned) 00:12:36.225 SGL Keyed: Not Supported 00:12:36.225 SGL Bit Bucket Descriptor: Not Supported 00:12:36.225 SGL Metadata Pointer: Not Supported 00:12:36.225 Oversized SGL: Not Supported 00:12:36.225 SGL Metadata Address: Not Supported 00:12:36.225 SGL Offset: Not Supported 00:12:36.225 Transport SGL Data Block: Not Supported 00:12:36.225 Replay Protected Memory Block: Not Supported 00:12:36.225 00:12:36.225 Firmware Slot Information 00:12:36.225 ========================= 00:12:36.225 Active slot: 1 00:12:36.225 Slot 1 Firmware Revision: 24.09 00:12:36.225 00:12:36.225 00:12:36.225 Commands Supported and Effects 00:12:36.225 ============================== 00:12:36.225 Admin Commands 00:12:36.225 -------------- 00:12:36.225 Get Log Page (02h): Supported 00:12:36.225 Identify (06h): Supported 00:12:36.225 Abort (08h): Supported 00:12:36.225 Set Features (09h): Supported 00:12:36.225 Get Features (0Ah): Supported 00:12:36.225 Asynchronous Event Request (0Ch): Supported 00:12:36.225 Keep Alive (18h): Supported 00:12:36.225 I/O Commands 00:12:36.225 ------------ 00:12:36.225 Flush (00h): Supported LBA-Change 00:12:36.225 Write (01h): Supported LBA-Change 00:12:36.225 Read (02h): Supported 00:12:36.225 Compare (05h): Supported 00:12:36.225 Write Zeroes (08h): Supported LBA-Change 00:12:36.225 Dataset Management (09h): Supported LBA-Change 00:12:36.225 Copy (19h): Supported LBA-Change 00:12:36.225 00:12:36.225 Error Log 00:12:36.225 ========= 00:12:36.225 00:12:36.225 Arbitration 00:12:36.225 =========== 00:12:36.225 Arbitration Burst: 1 00:12:36.225 00:12:36.225 Power Management 00:12:36.225 ================ 00:12:36.225 Number of Power States: 1 00:12:36.225 Current Power State: Power State #0 00:12:36.225 Power State #0: 00:12:36.225 Max Power: 0.00 W 00:12:36.225 Non-Operational State: Operational 00:12:36.225 Entry Latency: Not Reported 00:12:36.225 Exit Latency: Not Reported 00:12:36.225 Relative Read Throughput: 0 00:12:36.225 Relative Read Latency: 0 00:12:36.225 Relative Write Throughput: 0 00:12:36.225 Relative Write Latency: 0 00:12:36.225 Idle Power: Not Reported 00:12:36.225 Active Power: Not Reported 00:12:36.225 Non-Operational Permissive Mode: Not Supported 00:12:36.225 00:12:36.225 Health Information 00:12:36.225 ================== 00:12:36.225 Critical Warnings: 00:12:36.225 Available Spare Space: OK 00:12:36.225 Temperature: OK 00:12:36.225 Device Reliability: OK 00:12:36.225 Read Only: No 00:12:36.225 Volatile Memory Backup: OK 00:12:36.225 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:36.225 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:36.225 Available Spare: 0% 00:12:36.225 
Available Sp[2024-07-15 20:07:33.481543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:36.225 [2024-07-15 20:07:33.481552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:36.225 [2024-07-15 20:07:33.481579] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:36.225 [2024-07-15 20:07:33.481588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.225 [2024-07-15 20:07:33.481595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.225 [2024-07-15 20:07:33.481601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.225 [2024-07-15 20:07:33.481607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.225 [2024-07-15 20:07:33.483130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:36.225 [2024-07-15 20:07:33.483142] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:36.225 [2024-07-15 20:07:33.483702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.225 [2024-07-15 20:07:33.483740] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:36.225 [2024-07-15 20:07:33.483745] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:36.225 [2024-07-15 20:07:33.484709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:36.225 [2024-07-15 20:07:33.484719] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:36.225 [2024-07-15 20:07:33.484781] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:36.225 [2024-07-15 20:07:33.488131] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:36.225 are Threshold: 0% 00:12:36.225 Life Percentage Used: 0% 00:12:36.225 Data Units Read: 0 00:12:36.225 Data Units Written: 0 00:12:36.225 Host Read Commands: 0 00:12:36.225 Host Write Commands: 0 00:12:36.225 Controller Busy Time: 0 minutes 00:12:36.225 Power Cycles: 0 00:12:36.225 Power On Hours: 0 hours 00:12:36.225 Unsafe Shutdowns: 0 00:12:36.225 Unrecoverable Media Errors: 0 00:12:36.225 Lifetime Error Log Entries: 0 00:12:36.225 Warning Temperature Time: 0 minutes 00:12:36.225 Critical Temperature Time: 0 minutes 00:12:36.225 00:12:36.225 Number of Queues 00:12:36.225 ================ 00:12:36.225 Number of I/O Submission Queues: 127 00:12:36.225 Number of I/O Completion Queues: 127 00:12:36.225 00:12:36.225 Active Namespaces 00:12:36.225 ================= 00:12:36.225 Namespace ID:1 00:12:36.225 Error Recovery Timeout: Unlimited 00:12:36.225 Command 
Set Identifier: NVM (00h) 00:12:36.225 Deallocate: Supported 00:12:36.225 Deallocated/Unwritten Error: Not Supported 00:12:36.225 Deallocated Read Value: Unknown 00:12:36.225 Deallocate in Write Zeroes: Not Supported 00:12:36.225 Deallocated Guard Field: 0xFFFF 00:12:36.225 Flush: Supported 00:12:36.225 Reservation: Supported 00:12:36.225 Namespace Sharing Capabilities: Multiple Controllers 00:12:36.225 Size (in LBAs): 131072 (0GiB) 00:12:36.225 Capacity (in LBAs): 131072 (0GiB) 00:12:36.225 Utilization (in LBAs): 131072 (0GiB) 00:12:36.225 NGUID: C406BE85750E4D9D95B581EF07FACA09 00:12:36.225 UUID: c406be85-750e-4d9d-95b5-81ef07faca09 00:12:36.225 Thin Provisioning: Not Supported 00:12:36.225 Per-NS Atomic Units: Yes 00:12:36.225 Atomic Boundary Size (Normal): 0 00:12:36.225 Atomic Boundary Size (PFail): 0 00:12:36.225 Atomic Boundary Offset: 0 00:12:36.225 Maximum Single Source Range Length: 65535 00:12:36.225 Maximum Copy Length: 65535 00:12:36.225 Maximum Source Range Count: 1 00:12:36.225 NGUID/EUI64 Never Reused: No 00:12:36.225 Namespace Write Protected: No 00:12:36.225 Number of LBA Formats: 1 00:12:36.225 Current LBA Format: LBA Format #00 00:12:36.225 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:36.225 00:12:36.225 20:07:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:36.225 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.486 [2024-07-15 20:07:33.671715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.797 Initializing NVMe Controllers 00:12:41.797 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.797 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:41.797 Initialization complete. Launching workers. 00:12:41.797 ======================================================== 00:12:41.797 Latency(us) 00:12:41.797 Device Information : IOPS MiB/s Average min max 00:12:41.797 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39973.20 156.15 3204.90 837.58 9783.67 00:12:41.797 ======================================================== 00:12:41.797 Total : 39973.20 156.15 3204.90 837.58 9783.67 00:12:41.797 00:12:41.797 [2024-07-15 20:07:38.691843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.797 20:07:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:41.797 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.797 [2024-07-15 20:07:38.871664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:47.081 Initializing NVMe Controllers 00:12:47.081 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:47.081 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:47.081 Initialization complete. Launching workers. 
00:12:47.081 ======================================================== 00:12:47.081 Latency(us) 00:12:47.081 Device Information : IOPS MiB/s Average min max 00:12:47.081 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16053.17 62.71 7979.07 4986.39 9977.38 00:12:47.081 ======================================================== 00:12:47.081 Total : 16053.17 62.71 7979.07 4986.39 9977.38 00:12:47.081 00:12:47.081 [2024-07-15 20:07:43.912249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:47.081 20:07:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:47.081 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.081 [2024-07-15 20:07:44.106130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.367 [2024-07-15 20:07:49.188341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.367 Initializing NVMe Controllers 00:12:52.367 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.367 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.367 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:52.367 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:52.367 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:52.367 Initialization complete. Launching workers. 00:12:52.367 Starting thread on core 2 00:12:52.367 Starting thread on core 3 00:12:52.367 Starting thread on core 1 00:12:52.367 20:07:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:52.367 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.367 [2024-07-15 20:07:49.445463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.683 [2024-07-15 20:07:52.508335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.683 Initializing NVMe Controllers 00:12:55.683 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.683 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:55.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:55.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:55.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:55.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:55.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:55.683 Initialization complete. Launching workers. 
00:12:55.683 Starting thread on core 1 with urgent priority queue 00:12:55.683 Starting thread on core 2 with urgent priority queue 00:12:55.683 Starting thread on core 3 with urgent priority queue 00:12:55.683 Starting thread on core 0 with urgent priority queue 00:12:55.683 SPDK bdev Controller (SPDK1 ) core 0: 8885.67 IO/s 11.25 secs/100000 ios 00:12:55.683 SPDK bdev Controller (SPDK1 ) core 1: 10874.00 IO/s 9.20 secs/100000 ios 00:12:55.683 SPDK bdev Controller (SPDK1 ) core 2: 8583.67 IO/s 11.65 secs/100000 ios 00:12:55.683 SPDK bdev Controller (SPDK1 ) core 3: 15542.33 IO/s 6.43 secs/100000 ios 00:12:55.683 ======================================================== 00:12:55.683 00:12:55.683 20:07:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:55.683 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.683 [2024-07-15 20:07:52.771596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.683 Initializing NVMe Controllers 00:12:55.683 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.683 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.683 Namespace ID: 1 size: 0GB 00:12:55.683 Initialization complete. 00:12:55.683 INFO: using host memory buffer for IO 00:12:55.683 Hello world! 00:12:55.683 [2024-07-15 20:07:52.805781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.683 20:07:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:55.683 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.684 [2024-07-15 20:07:53.058549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.068 Initializing NVMe Controllers 00:12:57.068 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.068 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.068 Initialization complete. Launching workers. 
00:12:57.068 submit (in ns) avg, min, max = 9455.9, 3903.3, 3999573.3 00:12:57.068 complete (in ns) avg, min, max = 16591.1, 2371.7, 3998698.3 00:12:57.068 00:12:57.068 Submit histogram 00:12:57.068 ================ 00:12:57.068 Range in us Cumulative Count 00:12:57.068 3.893 - 3.920: 1.0169% ( 196) 00:12:57.068 3.920 - 3.947: 5.9922% ( 959) 00:12:57.068 3.947 - 3.973: 15.3152% ( 1797) 00:12:57.068 3.973 - 4.000: 25.0532% ( 1877) 00:12:57.068 4.000 - 4.027: 36.1505% ( 2139) 00:12:57.068 4.027 - 4.053: 48.3995% ( 2361) 00:12:57.068 4.053 - 4.080: 64.2905% ( 3063) 00:12:57.068 4.080 - 4.107: 78.9468% ( 2825) 00:12:57.068 4.107 - 4.133: 89.6965% ( 2072) 00:12:57.068 4.133 - 4.160: 95.3411% ( 1088) 00:12:57.068 4.160 - 4.187: 97.9715% ( 507) 00:12:57.068 4.187 - 4.213: 98.9624% ( 191) 00:12:57.068 4.213 - 4.240: 99.2737% ( 60) 00:12:57.068 4.240 - 4.267: 99.3982% ( 24) 00:12:57.068 4.267 - 4.293: 99.4189% ( 4) 00:12:57.068 4.293 - 4.320: 99.4293% ( 2) 00:12:57.068 4.320 - 4.347: 99.4345% ( 1) 00:12:57.068 4.347 - 4.373: 99.4397% ( 1) 00:12:57.068 4.400 - 4.427: 99.4449% ( 1) 00:12:57.068 4.453 - 4.480: 99.4501% ( 1) 00:12:57.068 4.587 - 4.613: 99.4553% ( 1) 00:12:57.068 4.720 - 4.747: 99.4604% ( 1) 00:12:57.068 4.827 - 4.853: 99.4656% ( 1) 00:12:57.068 5.093 - 5.120: 99.4708% ( 1) 00:12:57.068 5.147 - 5.173: 99.4760% ( 1) 00:12:57.068 5.173 - 5.200: 99.4916% ( 3) 00:12:57.068 5.387 - 5.413: 99.4968% ( 1) 00:12:57.068 5.600 - 5.627: 99.5019% ( 1) 00:12:57.068 5.760 - 5.787: 99.5071% ( 1) 00:12:57.068 5.947 - 5.973: 99.5123% ( 1) 00:12:57.068 5.973 - 6.000: 99.5227% ( 2) 00:12:57.068 6.000 - 6.027: 99.5279% ( 1) 00:12:57.068 6.027 - 6.053: 99.5331% ( 1) 00:12:57.068 6.053 - 6.080: 99.5435% ( 2) 00:12:57.068 6.107 - 6.133: 99.5694% ( 5) 00:12:57.068 6.160 - 6.187: 99.5746% ( 1) 00:12:57.068 6.187 - 6.213: 99.5850% ( 2) 00:12:57.068 6.213 - 6.240: 99.5953% ( 2) 00:12:57.068 6.240 - 6.267: 99.6005% ( 1) 00:12:57.068 6.320 - 6.347: 99.6057% ( 1) 00:12:57.068 6.400 - 6.427: 99.6109% ( 1) 00:12:57.068 6.427 - 6.453: 99.6213% ( 2) 00:12:57.068 6.560 - 6.587: 99.6265% ( 1) 00:12:57.068 6.587 - 6.613: 99.6316% ( 1) 00:12:57.068 6.640 - 6.667: 99.6368% ( 1) 00:12:57.068 6.747 - 6.773: 99.6420% ( 1) 00:12:57.068 6.773 - 6.800: 99.6472% ( 1) 00:12:57.068 6.800 - 6.827: 99.6576% ( 2) 00:12:57.068 6.827 - 6.880: 99.6628% ( 1) 00:12:57.068 6.880 - 6.933: 99.6680% ( 1) 00:12:57.068 6.933 - 6.987: 99.6732% ( 1) 00:12:57.068 6.987 - 7.040: 99.6783% ( 1) 00:12:57.068 7.093 - 7.147: 99.6939% ( 3) 00:12:57.068 7.200 - 7.253: 99.7095% ( 3) 00:12:57.068 7.253 - 7.307: 99.7250% ( 3) 00:12:57.068 7.307 - 7.360: 99.7458% ( 4) 00:12:57.068 7.360 - 7.413: 99.7510% ( 1) 00:12:57.068 7.413 - 7.467: 99.7613% ( 2) 00:12:57.068 7.467 - 7.520: 99.7769% ( 3) 00:12:57.068 7.520 - 7.573: 99.7821% ( 1) 00:12:57.068 7.680 - 7.733: 99.7925% ( 2) 00:12:57.068 7.840 - 7.893: 99.7977% ( 1) 00:12:57.068 7.947 - 8.000: 99.8029% ( 1) 00:12:57.068 8.000 - 8.053: 99.8080% ( 1) 00:12:57.068 8.160 - 8.213: 99.8132% ( 1) 00:12:57.068 8.213 - 8.267: 99.8184% ( 1) 00:12:57.068 8.640 - 8.693: 99.8236% ( 1) 00:12:57.068 8.693 - 8.747: 99.8288% ( 1) 00:12:57.068 8.800 - 8.853: 99.8340% ( 1) 00:12:57.068 8.960 - 9.013: 99.8392% ( 1) 00:12:57.068 11.840 - 11.893: 99.8444% ( 1) 00:12:57.068 13.867 - 13.973: 99.8495% ( 1) 00:12:57.068 15.680 - 15.787: 99.8547% ( 1) 00:12:57.068 16.320 - 16.427: 99.8599% ( 1) 00:12:57.069 96.427 - 96.853: 99.8651% ( 1) 00:12:57.069 3986.773 - 4014.080: 100.0000% ( 26) 00:12:57.069 00:12:57.069 Complete histogram 
00:12:57.069 ================== 00:12:57.069 Range in us Cumulative Count 00:12:57.069 2.360 - 2.373: 0.0104% ( 2) 00:12:57.069 2.387 - [2024-07-15 20:07:54.080944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.069 2.400: 0.9857% ( 188) 00:12:57.069 2.400 - 2.413: 1.2399% ( 49) 00:12:57.069 2.413 - 2.427: 1.3904% ( 29) 00:12:57.069 2.427 - 2.440: 1.4682% ( 15) 00:12:57.069 2.440 - 2.453: 1.4942% ( 5) 00:12:57.069 2.453 - 2.467: 36.1038% ( 6671) 00:12:57.069 2.467 - 2.480: 56.7315% ( 3976) 00:12:57.069 2.480 - 2.493: 68.4929% ( 2267) 00:12:57.069 2.493 - 2.507: 78.0908% ( 1850) 00:12:57.069 2.507 - 2.520: 81.5201% ( 661) 00:12:57.069 2.520 - 2.533: 83.7198% ( 424) 00:12:57.069 2.533 - 2.547: 89.8366% ( 1179) 00:12:57.069 2.547 - 2.560: 94.3761% ( 875) 00:12:57.069 2.560 - 2.573: 96.7990% ( 467) 00:12:57.069 2.573 - 2.587: 98.5888% ( 345) 00:12:57.069 2.587 - 2.600: 99.2374% ( 125) 00:12:57.069 2.600 - 2.613: 99.3774% ( 27) 00:12:57.069 2.613 - 2.627: 99.3930% ( 3) 00:12:57.069 2.627 - 2.640: 99.4034% ( 2) 00:12:57.069 2.640 - 2.653: 99.4086% ( 1) 00:12:57.069 2.707 - 2.720: 99.4137% ( 1) 00:12:57.069 2.733 - 2.747: 99.4189% ( 1) 00:12:57.069 2.800 - 2.813: 99.4241% ( 1) 00:12:57.069 4.347 - 4.373: 99.4293% ( 1) 00:12:57.069 4.507 - 4.533: 99.4345% ( 1) 00:12:57.069 4.613 - 4.640: 99.4449% ( 2) 00:12:57.069 4.693 - 4.720: 99.4501% ( 1) 00:12:57.069 4.720 - 4.747: 99.4553% ( 1) 00:12:57.069 4.827 - 4.853: 99.4604% ( 1) 00:12:57.069 4.880 - 4.907: 99.4656% ( 1) 00:12:57.069 4.960 - 4.987: 99.4708% ( 1) 00:12:57.069 4.987 - 5.013: 99.4760% ( 1) 00:12:57.069 5.013 - 5.040: 99.4812% ( 1) 00:12:57.069 5.040 - 5.067: 99.4864% ( 1) 00:12:57.069 5.120 - 5.147: 99.4916% ( 1) 00:12:57.069 5.253 - 5.280: 99.5019% ( 2) 00:12:57.069 5.413 - 5.440: 99.5071% ( 1) 00:12:57.069 5.520 - 5.547: 99.5123% ( 1) 00:12:57.069 5.547 - 5.573: 99.5227% ( 2) 00:12:57.069 5.573 - 5.600: 99.5279% ( 1) 00:12:57.069 5.680 - 5.707: 99.5331% ( 1) 00:12:57.069 5.707 - 5.733: 99.5383% ( 1) 00:12:57.069 5.760 - 5.787: 99.5435% ( 1) 00:12:57.069 5.787 - 5.813: 99.5486% ( 1) 00:12:57.069 5.840 - 5.867: 99.5590% ( 2) 00:12:57.069 5.867 - 5.893: 99.5642% ( 1) 00:12:57.069 5.973 - 6.000: 99.5694% ( 1) 00:12:57.069 6.027 - 6.053: 99.5746% ( 1) 00:12:57.069 6.053 - 6.080: 99.5798% ( 1) 00:12:57.069 6.133 - 6.160: 99.5850% ( 1) 00:12:57.069 6.213 - 6.240: 99.5901% ( 1) 00:12:57.069 6.267 - 6.293: 99.5953% ( 1) 00:12:57.069 6.320 - 6.347: 99.6057% ( 2) 00:12:57.069 6.987 - 7.040: 99.6109% ( 1) 00:12:57.069 7.040 - 7.093: 99.6161% ( 1) 00:12:57.069 7.147 - 7.200: 99.6213% ( 1) 00:12:57.069 11.573 - 11.627: 99.6265% ( 1) 00:12:57.069 12.800 - 12.853: 99.6316% ( 1) 00:12:57.069 13.493 - 13.547: 99.6368% ( 1) 00:12:57.069 13.547 - 13.600: 99.6420% ( 1) 00:12:57.069 42.453 - 42.667: 99.6472% ( 1) 00:12:57.069 3986.773 - 4014.080: 100.0000% ( 68) 00:12:57.069 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.069 [ 00:12:57.069 { 00:12:57.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.069 "subtype": "Discovery", 00:12:57.069 "listen_addresses": [], 00:12:57.069 "allow_any_host": true, 00:12:57.069 "hosts": [] 00:12:57.069 }, 00:12:57.069 { 00:12:57.069 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.069 "subtype": "NVMe", 00:12:57.069 "listen_addresses": [ 00:12:57.069 { 00:12:57.069 "trtype": "VFIOUSER", 00:12:57.069 "adrfam": "IPv4", 00:12:57.069 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.069 "trsvcid": "0" 00:12:57.069 } 00:12:57.069 ], 00:12:57.069 "allow_any_host": true, 00:12:57.069 "hosts": [], 00:12:57.069 "serial_number": "SPDK1", 00:12:57.069 "model_number": "SPDK bdev Controller", 00:12:57.069 "max_namespaces": 32, 00:12:57.069 "min_cntlid": 1, 00:12:57.069 "max_cntlid": 65519, 00:12:57.069 "namespaces": [ 00:12:57.069 { 00:12:57.069 "nsid": 1, 00:12:57.069 "bdev_name": "Malloc1", 00:12:57.069 "name": "Malloc1", 00:12:57.069 "nguid": "C406BE85750E4D9D95B581EF07FACA09", 00:12:57.069 "uuid": "c406be85-750e-4d9d-95b5-81ef07faca09" 00:12:57.069 } 00:12:57.069 ] 00:12:57.069 }, 00:12:57.069 { 00:12:57.069 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.069 "subtype": "NVMe", 00:12:57.069 "listen_addresses": [ 00:12:57.069 { 00:12:57.069 "trtype": "VFIOUSER", 00:12:57.069 "adrfam": "IPv4", 00:12:57.069 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.069 "trsvcid": "0" 00:12:57.069 } 00:12:57.069 ], 00:12:57.069 "allow_any_host": true, 00:12:57.069 "hosts": [], 00:12:57.069 "serial_number": "SPDK2", 00:12:57.069 "model_number": "SPDK bdev Controller", 00:12:57.069 "max_namespaces": 32, 00:12:57.069 "min_cntlid": 1, 00:12:57.069 "max_cntlid": 65519, 00:12:57.069 "namespaces": [ 00:12:57.069 { 00:12:57.069 "nsid": 1, 00:12:57.069 "bdev_name": "Malloc2", 00:12:57.069 "name": "Malloc2", 00:12:57.069 "nguid": "3BAB77430C3E4246A68CE758C3EF6834", 00:12:57.069 "uuid": "3bab7743-0c3e-4246-a68c-e758c3ef6834" 00:12:57.069 } 00:12:57.069 ] 00:12:57.069 } 00:12:57.069 ] 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=898173 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:57.069 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.069 Malloc3 00:12:57.069 [2024-07-15 20:07:54.473508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.069 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:57.330 [2024-07-15 20:07:54.643597] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.330 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.330 Asynchronous Event Request test 00:12:57.330 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.330 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:57.330 Registering asynchronous event callbacks... 00:12:57.330 Starting namespace attribute notice tests for all controllers... 00:12:57.330 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:57.330 aer_cb - Changed Namespace 00:12:57.330 Cleaning up... 00:12:57.592 [ 00:12:57.592 { 00:12:57.592 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.592 "subtype": "Discovery", 00:12:57.592 "listen_addresses": [], 00:12:57.592 "allow_any_host": true, 00:12:57.592 "hosts": [] 00:12:57.592 }, 00:12:57.592 { 00:12:57.592 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.592 "subtype": "NVMe", 00:12:57.592 "listen_addresses": [ 00:12:57.592 { 00:12:57.592 "trtype": "VFIOUSER", 00:12:57.592 "adrfam": "IPv4", 00:12:57.592 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.592 "trsvcid": "0" 00:12:57.592 } 00:12:57.592 ], 00:12:57.592 "allow_any_host": true, 00:12:57.592 "hosts": [], 00:12:57.592 "serial_number": "SPDK1", 00:12:57.592 "model_number": "SPDK bdev Controller", 00:12:57.592 "max_namespaces": 32, 00:12:57.592 "min_cntlid": 1, 00:12:57.592 "max_cntlid": 65519, 00:12:57.592 "namespaces": [ 00:12:57.592 { 00:12:57.592 "nsid": 1, 00:12:57.592 "bdev_name": "Malloc1", 00:12:57.592 "name": "Malloc1", 00:12:57.592 "nguid": "C406BE85750E4D9D95B581EF07FACA09", 00:12:57.592 "uuid": "c406be85-750e-4d9d-95b5-81ef07faca09" 00:12:57.592 }, 00:12:57.592 { 00:12:57.592 "nsid": 2, 00:12:57.592 "bdev_name": "Malloc3", 00:12:57.592 "name": "Malloc3", 00:12:57.592 "nguid": "632C582A3A1A4D9AB53C2A58E38C3AA6", 00:12:57.592 "uuid": "632c582a-3a1a-4d9a-b53c-2a58e38c3aa6" 00:12:57.592 } 00:12:57.592 ] 00:12:57.592 }, 00:12:57.592 { 00:12:57.592 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.592 "subtype": "NVMe", 00:12:57.592 "listen_addresses": [ 00:12:57.592 { 00:12:57.592 "trtype": "VFIOUSER", 00:12:57.592 "adrfam": "IPv4", 00:12:57.592 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.592 "trsvcid": "0" 00:12:57.592 } 00:12:57.592 ], 00:12:57.592 "allow_any_host": true, 00:12:57.592 "hosts": [], 00:12:57.592 "serial_number": "SPDK2", 00:12:57.592 "model_number": "SPDK bdev Controller", 00:12:57.592 
"max_namespaces": 32, 00:12:57.592 "min_cntlid": 1, 00:12:57.592 "max_cntlid": 65519, 00:12:57.592 "namespaces": [ 00:12:57.592 { 00:12:57.592 "nsid": 1, 00:12:57.592 "bdev_name": "Malloc2", 00:12:57.592 "name": "Malloc2", 00:12:57.592 "nguid": "3BAB77430C3E4246A68CE758C3EF6834", 00:12:57.592 "uuid": "3bab7743-0c3e-4246-a68c-e758c3ef6834" 00:12:57.592 } 00:12:57.592 ] 00:12:57.592 } 00:12:57.592 ] 00:12:57.592 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 898173 00:12:57.592 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:57.592 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:57.592 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:57.592 20:07:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:57.592 [2024-07-15 20:07:54.852109] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:12:57.592 [2024-07-15 20:07:54.852157] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898185 ] 00:12:57.592 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.592 [2024-07-15 20:07:54.883697] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:57.592 [2024-07-15 20:07:54.892356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:57.592 [2024-07-15 20:07:54.892377] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9163b22000 00:12:57.592 [2024-07-15 20:07:54.893362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.894364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.895370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.896374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.897383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.898388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.899402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.900412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:57.592 [2024-07-15 20:07:54.901421] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:57.592 [2024-07-15 20:07:54.901432] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9163b17000 00:12:57.592 [2024-07-15 20:07:54.902756] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:57.592 [2024-07-15 20:07:54.918964] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:57.593 [2024-07-15 20:07:54.918988] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:57.593 [2024-07-15 20:07:54.924070] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:57.593 [2024-07-15 20:07:54.924117] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:57.593 [2024-07-15 20:07:54.924202] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:57.593 [2024-07-15 20:07:54.924220] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:57.593 [2024-07-15 20:07:54.924225] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:57.593 [2024-07-15 20:07:54.925076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:57.593 [2024-07-15 20:07:54.925085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:57.593 [2024-07-15 20:07:54.925093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:57.593 [2024-07-15 20:07:54.926078] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:57.593 [2024-07-15 20:07:54.926087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:57.593 [2024-07-15 20:07:54.926094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:57.593 [2024-07-15 20:07:54.927091] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:57.593 [2024-07-15 20:07:54.927100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:57.593 [2024-07-15 20:07:54.928099] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:57.593 [2024-07-15 20:07:54.928107] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:57.593 [2024-07-15 20:07:54.928112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:57.593 [2024-07-15 20:07:54.928119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:57.593 [2024-07-15 20:07:54.928227] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:57.593 [2024-07-15 20:07:54.928232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:57.593 [2024-07-15 20:07:54.928240] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:57.593 [2024-07-15 20:07:54.929111] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:57.593 [2024-07-15 20:07:54.930117] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:57.593 [2024-07-15 20:07:54.931120] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:57.593 [2024-07-15 20:07:54.932125] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.593 [2024-07-15 20:07:54.932165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:57.593 [2024-07-15 20:07:54.933136] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:57.593 [2024-07-15 20:07:54.933145] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:57.593 [2024-07-15 20:07:54.933150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.933171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:57.593 [2024-07-15 20:07:54.933178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.933191] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:57.593 [2024-07-15 20:07:54.933196] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:57.593 [2024-07-15 20:07:54.933208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.942129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.942141] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:57.593 [2024-07-15 20:07:54.942148] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:57.593 [2024-07-15 20:07:54.942153] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:57.593 [2024-07-15 20:07:54.942157] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:57.593 [2024-07-15 20:07:54.942162] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:57.593 [2024-07-15 20:07:54.942167] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:57.593 [2024-07-15 20:07:54.942171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.942179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.942190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.950128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.950144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.593 [2024-07-15 20:07:54.950155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.593 [2024-07-15 20:07:54.950163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.593 [2024-07-15 20:07:54.950172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.593 [2024-07-15 20:07:54.950177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.950185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.950194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.958130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.958138] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:57.593 [2024-07-15 20:07:54.958143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.958149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.958155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.958164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.966128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.966193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.966202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.966209] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:57.593 [2024-07-15 20:07:54.966214] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:57.593 [2024-07-15 20:07:54.966220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.974128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.974139] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:57.593 [2024-07-15 20:07:54.974152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.974159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.974166] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:57.593 [2024-07-15 20:07:54.974171] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:57.593 [2024-07-15 20:07:54.974177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.982129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.982144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.982151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.982159] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:57.593 [2024-07-15 20:07:54.982164] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:57.593 [2024-07-15 20:07:54.982170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:57.593 [2024-07-15 20:07:54.990129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:57.593 [2024-07-15 20:07:54.990139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.990146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.990154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.990159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.990165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.990170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:57.593 [2024-07-15 20:07:54.990175] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:57.594 [2024-07-15 20:07:54.990179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:57.594 [2024-07-15 20:07:54.990184] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:57.594 [2024-07-15 20:07:54.990200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:57.594 [2024-07-15 20:07:54.998130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:57.594 [2024-07-15 20:07:54.998144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:57.594 [2024-07-15 20:07:55.006131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:57.594 [2024-07-15 20:07:55.006144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:57.594 [2024-07-15 20:07:55.014127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:57.594 [2024-07-15 20:07:55.014141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:57.594 [2024-07-15 20:07:55.022129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:57.594 [2024-07-15 20:07:55.022145] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:57.594 [2024-07-15 20:07:55.022152] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:57.594 [2024-07-15 20:07:55.022156] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:57.594 [2024-07-15 20:07:55.022159] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:57.594 [2024-07-15 20:07:55.022165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:57.594 [2024-07-15 20:07:55.022173] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:57.594 [2024-07-15 20:07:55.022177] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:57.594 [2024-07-15 20:07:55.022183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:57.594 [2024-07-15 20:07:55.022190] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:57.594 [2024-07-15 20:07:55.022195] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:57.594 [2024-07-15 20:07:55.022201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:57.594 [2024-07-15 20:07:55.022208] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:57.594 [2024-07-15 20:07:55.022212] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:57.594 [2024-07-15 20:07:55.022218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:57.856 [2024-07-15 20:07:55.030129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:57.856 [2024-07-15 20:07:55.030144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:57.856 [2024-07-15 20:07:55.030155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:57.856 [2024-07-15 20:07:55.030162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:57.856 ===================================================== 00:12:57.856 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.856 ===================================================== 00:12:57.856 Controller Capabilities/Features 00:12:57.856 ================================ 00:12:57.856 Vendor ID: 4e58 00:12:57.856 Subsystem Vendor ID: 4e58 00:12:57.856 Serial Number: SPDK2 00:12:57.856 Model Number: SPDK bdev Controller 00:12:57.856 Firmware Version: 24.09 00:12:57.856 Recommended Arb Burst: 6 00:12:57.856 IEEE OUI Identifier: 8d 6b 50 00:12:57.856 Multi-path I/O 00:12:57.856 May have multiple subsystem ports: Yes 00:12:57.856 May have multiple controllers: Yes 00:12:57.856 Associated with SR-IOV VF: No 00:12:57.856 Max Data Transfer Size: 131072 00:12:57.856 Max Number of Namespaces: 32 00:12:57.856 Max Number of I/O Queues: 127 00:12:57.856 NVMe Specification Version (VS): 1.3 00:12:57.856 NVMe Specification Version (Identify): 1.3 00:12:57.856 Maximum Queue Entries: 256 00:12:57.856 Contiguous Queues Required: Yes 00:12:57.856 Arbitration Mechanisms 
Supported 00:12:57.856 Weighted Round Robin: Not Supported 00:12:57.856 Vendor Specific: Not Supported 00:12:57.856 Reset Timeout: 15000 ms 00:12:57.856 Doorbell Stride: 4 bytes 00:12:57.856 NVM Subsystem Reset: Not Supported 00:12:57.856 Command Sets Supported 00:12:57.856 NVM Command Set: Supported 00:12:57.856 Boot Partition: Not Supported 00:12:57.856 Memory Page Size Minimum: 4096 bytes 00:12:57.856 Memory Page Size Maximum: 4096 bytes 00:12:57.856 Persistent Memory Region: Not Supported 00:12:57.856 Optional Asynchronous Events Supported 00:12:57.856 Namespace Attribute Notices: Supported 00:12:57.856 Firmware Activation Notices: Not Supported 00:12:57.856 ANA Change Notices: Not Supported 00:12:57.856 PLE Aggregate Log Change Notices: Not Supported 00:12:57.856 LBA Status Info Alert Notices: Not Supported 00:12:57.856 EGE Aggregate Log Change Notices: Not Supported 00:12:57.856 Normal NVM Subsystem Shutdown event: Not Supported 00:12:57.856 Zone Descriptor Change Notices: Not Supported 00:12:57.856 Discovery Log Change Notices: Not Supported 00:12:57.856 Controller Attributes 00:12:57.856 128-bit Host Identifier: Supported 00:12:57.856 Non-Operational Permissive Mode: Not Supported 00:12:57.856 NVM Sets: Not Supported 00:12:57.856 Read Recovery Levels: Not Supported 00:12:57.856 Endurance Groups: Not Supported 00:12:57.856 Predictable Latency Mode: Not Supported 00:12:57.856 Traffic Based Keep ALive: Not Supported 00:12:57.856 Namespace Granularity: Not Supported 00:12:57.856 SQ Associations: Not Supported 00:12:57.856 UUID List: Not Supported 00:12:57.856 Multi-Domain Subsystem: Not Supported 00:12:57.856 Fixed Capacity Management: Not Supported 00:12:57.856 Variable Capacity Management: Not Supported 00:12:57.856 Delete Endurance Group: Not Supported 00:12:57.856 Delete NVM Set: Not Supported 00:12:57.856 Extended LBA Formats Supported: Not Supported 00:12:57.856 Flexible Data Placement Supported: Not Supported 00:12:57.856 00:12:57.856 Controller Memory Buffer Support 00:12:57.856 ================================ 00:12:57.856 Supported: No 00:12:57.856 00:12:57.856 Persistent Memory Region Support 00:12:57.856 ================================ 00:12:57.856 Supported: No 00:12:57.856 00:12:57.856 Admin Command Set Attributes 00:12:57.856 ============================ 00:12:57.856 Security Send/Receive: Not Supported 00:12:57.856 Format NVM: Not Supported 00:12:57.856 Firmware Activate/Download: Not Supported 00:12:57.856 Namespace Management: Not Supported 00:12:57.856 Device Self-Test: Not Supported 00:12:57.856 Directives: Not Supported 00:12:57.856 NVMe-MI: Not Supported 00:12:57.856 Virtualization Management: Not Supported 00:12:57.856 Doorbell Buffer Config: Not Supported 00:12:57.856 Get LBA Status Capability: Not Supported 00:12:57.856 Command & Feature Lockdown Capability: Not Supported 00:12:57.856 Abort Command Limit: 4 00:12:57.856 Async Event Request Limit: 4 00:12:57.856 Number of Firmware Slots: N/A 00:12:57.856 Firmware Slot 1 Read-Only: N/A 00:12:57.856 Firmware Activation Without Reset: N/A 00:12:57.856 Multiple Update Detection Support: N/A 00:12:57.856 Firmware Update Granularity: No Information Provided 00:12:57.856 Per-Namespace SMART Log: No 00:12:57.856 Asymmetric Namespace Access Log Page: Not Supported 00:12:57.856 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:57.856 Command Effects Log Page: Supported 00:12:57.856 Get Log Page Extended Data: Supported 00:12:57.856 Telemetry Log Pages: Not Supported 00:12:57.856 Persistent Event Log Pages: Not Supported 
00:12:57.856 Supported Log Pages Log Page: May Support 00:12:57.856 Commands Supported & Effects Log Page: Not Supported 00:12:57.856 Feature Identifiers & Effects Log Page:May Support 00:12:57.856 NVMe-MI Commands & Effects Log Page: May Support 00:12:57.856 Data Area 4 for Telemetry Log: Not Supported 00:12:57.856 Error Log Page Entries Supported: 128 00:12:57.856 Keep Alive: Supported 00:12:57.856 Keep Alive Granularity: 10000 ms 00:12:57.856 00:12:57.856 NVM Command Set Attributes 00:12:57.856 ========================== 00:12:57.856 Submission Queue Entry Size 00:12:57.856 Max: 64 00:12:57.856 Min: 64 00:12:57.856 Completion Queue Entry Size 00:12:57.856 Max: 16 00:12:57.856 Min: 16 00:12:57.856 Number of Namespaces: 32 00:12:57.856 Compare Command: Supported 00:12:57.856 Write Uncorrectable Command: Not Supported 00:12:57.856 Dataset Management Command: Supported 00:12:57.856 Write Zeroes Command: Supported 00:12:57.856 Set Features Save Field: Not Supported 00:12:57.856 Reservations: Not Supported 00:12:57.856 Timestamp: Not Supported 00:12:57.856 Copy: Supported 00:12:57.856 Volatile Write Cache: Present 00:12:57.856 Atomic Write Unit (Normal): 1 00:12:57.856 Atomic Write Unit (PFail): 1 00:12:57.856 Atomic Compare & Write Unit: 1 00:12:57.856 Fused Compare & Write: Supported 00:12:57.856 Scatter-Gather List 00:12:57.856 SGL Command Set: Supported (Dword aligned) 00:12:57.856 SGL Keyed: Not Supported 00:12:57.856 SGL Bit Bucket Descriptor: Not Supported 00:12:57.856 SGL Metadata Pointer: Not Supported 00:12:57.856 Oversized SGL: Not Supported 00:12:57.856 SGL Metadata Address: Not Supported 00:12:57.856 SGL Offset: Not Supported 00:12:57.856 Transport SGL Data Block: Not Supported 00:12:57.856 Replay Protected Memory Block: Not Supported 00:12:57.856 00:12:57.856 Firmware Slot Information 00:12:57.856 ========================= 00:12:57.856 Active slot: 1 00:12:57.856 Slot 1 Firmware Revision: 24.09 00:12:57.856 00:12:57.856 00:12:57.856 Commands Supported and Effects 00:12:57.856 ============================== 00:12:57.856 Admin Commands 00:12:57.856 -------------- 00:12:57.856 Get Log Page (02h): Supported 00:12:57.856 Identify (06h): Supported 00:12:57.856 Abort (08h): Supported 00:12:57.856 Set Features (09h): Supported 00:12:57.856 Get Features (0Ah): Supported 00:12:57.856 Asynchronous Event Request (0Ch): Supported 00:12:57.856 Keep Alive (18h): Supported 00:12:57.856 I/O Commands 00:12:57.856 ------------ 00:12:57.856 Flush (00h): Supported LBA-Change 00:12:57.856 Write (01h): Supported LBA-Change 00:12:57.856 Read (02h): Supported 00:12:57.856 Compare (05h): Supported 00:12:57.856 Write Zeroes (08h): Supported LBA-Change 00:12:57.856 Dataset Management (09h): Supported LBA-Change 00:12:57.856 Copy (19h): Supported LBA-Change 00:12:57.856 00:12:57.856 Error Log 00:12:57.856 ========= 00:12:57.856 00:12:57.856 Arbitration 00:12:57.856 =========== 00:12:57.856 Arbitration Burst: 1 00:12:57.856 00:12:57.856 Power Management 00:12:57.856 ================ 00:12:57.856 Number of Power States: 1 00:12:57.856 Current Power State: Power State #0 00:12:57.856 Power State #0: 00:12:57.856 Max Power: 0.00 W 00:12:57.856 Non-Operational State: Operational 00:12:57.856 Entry Latency: Not Reported 00:12:57.856 Exit Latency: Not Reported 00:12:57.856 Relative Read Throughput: 0 00:12:57.856 Relative Read Latency: 0 00:12:57.856 Relative Write Throughput: 0 00:12:57.856 Relative Write Latency: 0 00:12:57.856 Idle Power: Not Reported 00:12:57.856 Active Power: Not Reported 00:12:57.856 
Non-Operational Permissive Mode: Not Supported 00:12:57.856 00:12:57.856 Health Information 00:12:57.856 ================== 00:12:57.856 Critical Warnings: 00:12:57.856 Available Spare Space: OK 00:12:57.856 Temperature: OK 00:12:57.856 Device Reliability: OK 00:12:57.857 Read Only: No 00:12:57.857 Volatile Memory Backup: OK 00:12:57.857 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:57.857 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:57.857 Available Spare: 0% 00:12:57.857 Available Sp[2024-07-15 20:07:55.030258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:57.857 [2024-07-15 20:07:55.037643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:57.857 [2024-07-15 20:07:55.037679] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:57.857 [2024-07-15 20:07:55.037735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.857 [2024-07-15 20:07:55.037742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.857 [2024-07-15 20:07:55.037748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.857 [2024-07-15 20:07:55.037754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.857 [2024-07-15 20:07:55.038182] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:57.857 [2024-07-15 20:07:55.038194] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:57.857 [2024-07-15 20:07:55.039187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.857 [2024-07-15 20:07:55.039236] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:57.857 [2024-07-15 20:07:55.039246] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:57.857 [2024-07-15 20:07:55.040189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:57.857 [2024-07-15 20:07:55.040202] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:57.857 [2024-07-15 20:07:55.040254] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:57.857 [2024-07-15 20:07:55.041630] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:57.857 are Threshold: 0% 00:12:57.857 Life Percentage Used: 0% 00:12:57.857 Data Units Read: 0 00:12:57.857 Data Units Written: 0 00:12:57.857 Host Read Commands: 0 00:12:57.857 Host Write Commands: 0 00:12:57.857 Controller Busy Time: 0 minutes 00:12:57.857 Power Cycles: 0 00:12:57.857 Power On Hours: 0 hours 00:12:57.857 Unsafe Shutdowns: 0 00:12:57.857 Unrecoverable Media 
Errors: 0 00:12:57.857 Lifetime Error Log Entries: 0 00:12:57.857 Warning Temperature Time: 0 minutes 00:12:57.857 Critical Temperature Time: 0 minutes 00:12:57.857 00:12:57.857 Number of Queues 00:12:57.857 ================ 00:12:57.857 Number of I/O Submission Queues: 127 00:12:57.857 Number of I/O Completion Queues: 127 00:12:57.857 00:12:57.857 Active Namespaces 00:12:57.857 ================= 00:12:57.857 Namespace ID:1 00:12:57.857 Error Recovery Timeout: Unlimited 00:12:57.857 Command Set Identifier: NVM (00h) 00:12:57.857 Deallocate: Supported 00:12:57.857 Deallocated/Unwritten Error: Not Supported 00:12:57.857 Deallocated Read Value: Unknown 00:12:57.857 Deallocate in Write Zeroes: Not Supported 00:12:57.857 Deallocated Guard Field: 0xFFFF 00:12:57.857 Flush: Supported 00:12:57.857 Reservation: Supported 00:12:57.857 Namespace Sharing Capabilities: Multiple Controllers 00:12:57.857 Size (in LBAs): 131072 (0GiB) 00:12:57.857 Capacity (in LBAs): 131072 (0GiB) 00:12:57.857 Utilization (in LBAs): 131072 (0GiB) 00:12:57.857 NGUID: 3BAB77430C3E4246A68CE758C3EF6834 00:12:57.857 UUID: 3bab7743-0c3e-4246-a68c-e758c3ef6834 00:12:57.857 Thin Provisioning: Not Supported 00:12:57.857 Per-NS Atomic Units: Yes 00:12:57.857 Atomic Boundary Size (Normal): 0 00:12:57.857 Atomic Boundary Size (PFail): 0 00:12:57.857 Atomic Boundary Offset: 0 00:12:57.857 Maximum Single Source Range Length: 65535 00:12:57.857 Maximum Copy Length: 65535 00:12:57.857 Maximum Source Range Count: 1 00:12:57.857 NGUID/EUI64 Never Reused: No 00:12:57.857 Namespace Write Protected: No 00:12:57.857 Number of LBA Formats: 1 00:12:57.857 Current LBA Format: LBA Format #00 00:12:57.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:57.857 00:12:57.857 20:07:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:57.857 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.857 [2024-07-15 20:07:55.225136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.147 Initializing NVMe Controllers 00:13:03.147 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.147 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:03.147 Initialization complete. Launching workers. 
00:13:03.147 ======================================================== 00:13:03.147 Latency(us) 00:13:03.147 Device Information : IOPS MiB/s Average min max 00:13:03.147 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40023.60 156.34 3200.49 833.38 6866.10 00:13:03.147 ======================================================== 00:13:03.147 Total : 40023.60 156.34 3200.49 833.38 6866.10 00:13:03.147 00:13:03.147 [2024-07-15 20:08:00.333314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.147 20:08:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:03.147 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.147 [2024-07-15 20:08:00.515867] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.467 Initializing NVMe Controllers 00:13:08.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:08.467 Initialization complete. Launching workers. 00:13:08.467 ======================================================== 00:13:08.467 Latency(us) 00:13:08.467 Device Information : IOPS MiB/s Average min max 00:13:08.467 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35383.34 138.22 3617.28 1105.56 9776.70 00:13:08.467 ======================================================== 00:13:08.467 Total : 35383.34 138.22 3617.28 1105.56 9776.70 00:13:08.467 00:13:08.467 [2024-07-15 20:08:05.534856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.467 20:08:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:08.467 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.467 [2024-07-15 20:08:05.723980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.759 [2024-07-15 20:08:10.864197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.759 Initializing NVMe Controllers 00:13:13.759 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:13.759 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:13.759 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:13.759 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:13.759 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:13.759 Initialization complete. Launching workers. 
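For context, the two latency summaries above were produced by SPDK's bundled spdk_nvme_perf tool, pointed at the vfio-user endpoint through the same transport-ID string used throughout this test. A minimal sketch of those two passes, with the flag meanings spelled out in comments (paths are specific to this Jenkins workspace; the -s/-g memory options are reproduced from the harness without further interpretation):

  # queue depth 128 (-q), 4096-byte I/Os (-o), 5-second run (-t), core mask 0x2 (-c)
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # read pass
  "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # write pass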
00:13:13.759 Starting thread on core 2 00:13:13.759 Starting thread on core 3 00:13:13.759 Starting thread on core 1 00:13:13.759 20:08:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:13.759 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.759 [2024-07-15 20:08:11.120565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.060 [2024-07-15 20:08:14.159934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.060 Initializing NVMe Controllers 00:13:17.060 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.060 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.060 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:17.060 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:17.060 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:17.060 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:17.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:17.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:17.060 Initialization complete. Launching workers. 00:13:17.060 Starting thread on core 1 with urgent priority queue 00:13:17.060 Starting thread on core 2 with urgent priority queue 00:13:17.060 Starting thread on core 3 with urgent priority queue 00:13:17.060 Starting thread on core 0 with urgent priority queue 00:13:17.060 SPDK bdev Controller (SPDK2 ) core 0: 12271.00 IO/s 8.15 secs/100000 ios 00:13:17.060 SPDK bdev Controller (SPDK2 ) core 1: 14980.67 IO/s 6.68 secs/100000 ios 00:13:17.060 SPDK bdev Controller (SPDK2 ) core 2: 7965.33 IO/s 12.55 secs/100000 ios 00:13:17.060 SPDK bdev Controller (SPDK2 ) core 3: 9352.67 IO/s 10.69 secs/100000 ios 00:13:17.060 ======================================================== 00:13:17.060 00:13:17.060 20:08:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:17.060 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.060 [2024-07-15 20:08:14.430554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.060 Initializing NVMe Controllers 00:13:17.060 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.060 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.060 Namespace ID: 1 size: 0GB 00:13:17.060 Initialization complete. 00:13:17.060 INFO: using host memory buffer for IO 00:13:17.060 Hello world! 
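As a quick sanity check on the arbitration summary above, the secs/100000 ios column is simply the fixed I/O count divided by the measured rate: for core 0, 100000 ios / 12271.00 IO/s is approximately 8.15 s, and for core 1, 100000 / 14980.67 is approximately 6.68 s, matching the reported values.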
00:13:17.060 [2024-07-15 20:08:14.442638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.060 20:08:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:17.320 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.320 [2024-07-15 20:08:14.700372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:18.707 Initializing NVMe Controllers 00:13:18.707 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:18.707 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:18.707 Initialization complete. Launching workers. 00:13:18.707 submit (in ns) avg, min, max = 10244.3, 3891.7, 4000002.5 00:13:18.707 complete (in ns) avg, min, max = 15278.1, 2370.0, 4157521.7 00:13:18.707 00:13:18.707 Submit histogram 00:13:18.707 ================ 00:13:18.707 Range in us Cumulative Count 00:13:18.707 3.867 - 3.893: 0.0052% ( 1) 00:13:18.707 3.893 - 3.920: 1.5172% ( 293) 00:13:18.707 3.920 - 3.947: 7.5808% ( 1175) 00:13:18.707 3.947 - 3.973: 16.3330% ( 1696) 00:13:18.707 3.973 - 4.000: 27.1958% ( 2105) 00:13:18.707 4.000 - 4.027: 37.8780% ( 2070) 00:13:18.707 4.027 - 4.053: 49.2620% ( 2206) 00:13:18.707 4.053 - 4.080: 64.4958% ( 2952) 00:13:18.707 4.080 - 4.107: 79.2445% ( 2858) 00:13:18.707 4.107 - 4.133: 90.5666% ( 2194) 00:13:18.707 4.133 - 4.160: 96.0780% ( 1068) 00:13:18.707 4.160 - 4.187: 98.2919% ( 429) 00:13:18.707 4.187 - 4.213: 98.9576% ( 129) 00:13:18.707 4.213 - 4.240: 99.2156% ( 50) 00:13:18.707 4.240 - 4.267: 99.3033% ( 17) 00:13:18.707 4.267 - 4.293: 99.3240% ( 4) 00:13:18.707 4.293 - 4.320: 99.3446% ( 4) 00:13:18.707 4.320 - 4.347: 99.3549% ( 2) 00:13:18.707 4.347 - 4.373: 99.3601% ( 1) 00:13:18.707 4.373 - 4.400: 99.3653% ( 1) 00:13:18.707 4.400 - 4.427: 99.3704% ( 1) 00:13:18.707 4.453 - 4.480: 99.3756% ( 1) 00:13:18.707 4.480 - 4.507: 99.3807% ( 1) 00:13:18.707 4.533 - 4.560: 99.3859% ( 1) 00:13:18.707 4.560 - 4.587: 99.3911% ( 1) 00:13:18.707 4.640 - 4.667: 99.3962% ( 1) 00:13:18.707 4.693 - 4.720: 99.4014% ( 1) 00:13:18.707 4.720 - 4.747: 99.4065% ( 1) 00:13:18.707 4.800 - 4.827: 99.4169% ( 2) 00:13:18.707 4.827 - 4.853: 99.4220% ( 1) 00:13:18.707 5.227 - 5.253: 99.4272% ( 1) 00:13:18.707 5.333 - 5.360: 99.4323% ( 1) 00:13:18.707 5.413 - 5.440: 99.4427% ( 2) 00:13:18.707 5.440 - 5.467: 99.4478% ( 1) 00:13:18.707 5.547 - 5.573: 99.4530% ( 1) 00:13:18.707 5.627 - 5.653: 99.4581% ( 1) 00:13:18.707 5.653 - 5.680: 99.4685% ( 2) 00:13:18.707 5.760 - 5.787: 99.4736% ( 1) 00:13:18.707 5.840 - 5.867: 99.4788% ( 1) 00:13:18.707 6.000 - 6.027: 99.4943% ( 3) 00:13:18.707 6.080 - 6.107: 99.5046% ( 2) 00:13:18.707 6.107 - 6.133: 99.5098% ( 1) 00:13:18.707 6.133 - 6.160: 99.5149% ( 1) 00:13:18.707 6.160 - 6.187: 99.5201% ( 1) 00:13:18.707 6.187 - 6.213: 99.5252% ( 1) 00:13:18.707 6.240 - 6.267: 99.5304% ( 1) 00:13:18.707 6.267 - 6.293: 99.5356% ( 1) 00:13:18.707 6.293 - 6.320: 99.5407% ( 1) 00:13:18.707 6.320 - 6.347: 99.5459% ( 1) 00:13:18.707 6.373 - 6.400: 99.5510% ( 1) 00:13:18.707 6.427 - 6.453: 99.5562% ( 1) 00:13:18.707 6.480 - 6.507: 99.5665% ( 2) 00:13:18.707 6.507 - 6.533: 99.5872% ( 4) 00:13:18.707 6.533 - 6.560: 99.5975% ( 2) 00:13:18.707 6.560 - 6.587: 99.6026% ( 1) 00:13:18.707 6.587 - 6.613: 99.6130% ( 2) 00:13:18.707 6.613 - 6.640: 99.6181% ( 1) 
00:13:18.707 6.667 - 6.693: 99.6336% ( 3) 00:13:18.707 6.693 - 6.720: 99.6388% ( 1) 00:13:18.707 6.720 - 6.747: 99.6646% ( 5) 00:13:18.707 6.773 - 6.800: 99.6800% ( 3) 00:13:18.707 6.827 - 6.880: 99.6904% ( 2) 00:13:18.707 6.880 - 6.933: 99.7007% ( 2) 00:13:18.707 7.093 - 7.147: 99.7110% ( 2) 00:13:18.707 7.200 - 7.253: 99.7162% ( 1) 00:13:18.707 7.253 - 7.307: 99.7368% ( 4) 00:13:18.707 7.307 - 7.360: 99.7420% ( 1) 00:13:18.707 7.413 - 7.467: 99.7523% ( 2) 00:13:18.707 7.467 - 7.520: 99.7729% ( 4) 00:13:18.707 7.520 - 7.573: 99.7781% ( 1) 00:13:18.707 7.573 - 7.627: 99.7884% ( 2) 00:13:18.707 7.680 - 7.733: 99.7987% ( 2) 00:13:18.707 7.840 - 7.893: 99.8091% ( 2) 00:13:18.707 8.053 - 8.107: 99.8142% ( 1) 00:13:18.707 8.853 - 8.907: 99.8194% ( 1) 00:13:18.707 9.067 - 9.120: 99.8245% ( 1) 00:13:18.707 11.307 - 11.360: 99.8297% ( 1) 00:13:18.707 12.480 - 12.533: 99.8349% ( 1) 00:13:18.707 12.693 - 12.747: 99.8400% ( 1) 00:13:18.707 [2024-07-15 20:08:15.795929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:18.707 13.280 - 13.333: 99.8452% ( 1) 00:13:18.707 3986.773 - 4014.080: 100.0000% ( 30) 00:13:18.707 00:13:18.707 Complete histogram 00:13:18.707 ================== 00:13:18.707 Range in us Cumulative Count 00:13:18.707 2.360 - 2.373: 0.0052% ( 1) 00:13:18.707 2.373 - 2.387: 0.0619% ( 11) 00:13:18.707 2.387 - 2.400: 1.0063% ( 183) 00:13:18.707 2.400 - 2.413: 1.0682% ( 12) 00:13:18.707 2.413 - 2.427: 1.2230% ( 30) 00:13:18.707 2.427 - 2.440: 1.2540% ( 6) 00:13:18.707 2.440 - 2.453: 1.5998% ( 67) 00:13:18.707 2.453 - 2.467: 52.5751% ( 9878) 00:13:18.707 2.467 - 2.480: 59.5882% ( 1359) 00:13:18.707 2.480 - 2.493: 73.8105% ( 2756) 00:13:18.707 2.493 - 2.507: 80.6172% ( 1319) 00:13:18.707 2.507 - 2.520: 82.5266% ( 370) 00:13:18.707 2.520 - 2.533: 86.3453% ( 740) 00:13:18.707 2.533 - 2.547: 91.7793% ( 1053) 00:13:18.707 2.547 - 2.560: 95.4433% ( 710) 00:13:18.707 2.560 - 2.573: 97.6313% ( 424) 00:13:18.707 2.573 - 2.587: 98.8647% ( 239) 00:13:18.707 2.587 - 2.600: 99.3033% ( 85) 00:13:18.707 2.600 - 2.613: 99.4169% ( 22) 00:13:18.707 2.613 - 2.627: 99.4272% ( 2) 00:13:18.707 2.627 - 2.640: 99.4323% ( 1) 00:13:18.707 2.653 - 2.667: 99.4375% ( 1) 00:13:18.707 4.373 - 4.400: 99.4478% ( 2) 00:13:18.707 4.400 - 4.427: 99.4530% ( 1) 00:13:18.707 4.533 - 4.560: 99.4581% ( 1) 00:13:18.707 4.560 - 4.587: 99.4633% ( 1) 00:13:18.707 4.587 - 4.613: 99.4685% ( 1) 00:13:18.707 4.613 - 4.640: 99.4736% ( 1) 00:13:18.707 4.693 - 4.720: 99.4788% ( 1) 00:13:18.707 4.747 - 4.773: 99.4840% ( 1) 00:13:18.707 4.773 - 4.800: 99.4891% ( 1) 00:13:18.707 4.800 - 4.827: 99.4994% ( 2) 00:13:18.707 4.853 - 4.880: 99.5046% ( 1) 00:13:18.707 4.880 - 4.907: 99.5098% ( 1) 00:13:18.707 4.907 - 4.933: 99.5149% ( 1) 00:13:18.707 4.960 - 4.987: 99.5201% ( 1) 00:13:18.707 5.013 - 5.040: 99.5252% ( 1) 00:13:18.707 5.040 - 5.067: 99.5407% ( 3) 00:13:18.707 5.067 - 5.093: 99.5459% ( 1) 00:13:18.707 5.093 - 5.120: 99.5510% ( 1) 00:13:18.707 5.147 - 5.173: 99.5562% ( 1) 00:13:18.707 5.173 - 5.200: 99.5614% ( 1) 00:13:18.707 5.227 - 5.253: 99.5665% ( 1) 00:13:18.707 5.280 - 5.307: 99.5768% ( 2) 00:13:18.707 5.360 - 5.387: 99.5820% ( 1) 00:13:18.707 5.467 - 5.493: 99.5872% ( 1) 00:13:18.707 5.493 - 5.520: 99.5923% ( 1) 00:13:18.707 5.520 - 5.547: 99.5975% ( 1) 00:13:18.707 5.600 - 5.627: 99.6026% ( 1) 00:13:18.707 5.627 - 5.653: 99.6130% ( 2) 00:13:18.707 5.707 - 5.733: 99.6181% ( 1) 00:13:18.707 5.947 - 5.973: 99.6284% ( 2) 00:13:18.707 6.000 - 6.027: 99.6336% ( 1) 00:13:18.707 
6.133 - 6.160: 99.6388% ( 1) 00:13:18.707 6.160 - 6.187: 99.6439% ( 1) 00:13:18.707 6.240 - 6.267: 99.6491% ( 1) 00:13:18.707 6.533 - 6.560: 99.6542% ( 1) 00:13:18.707 7.520 - 7.573: 99.6594% ( 1) 00:13:18.707 10.880 - 10.933: 99.6646% ( 1) 00:13:18.707 10.987 - 11.040: 99.6697% ( 1) 00:13:18.707 12.427 - 12.480: 99.6749% ( 1) 00:13:18.707 13.493 - 13.547: 99.6800% ( 1) 00:13:18.708 3986.773 - 4014.080: 99.9948% ( 61) 00:13:18.708 4150.613 - 4177.920: 100.0000% ( 1) 00:13:18.708 00:13:18.708 20:08:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:18.708 20:08:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:18.708 20:08:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:18.708 20:08:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:18.708 20:08:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:18.708 [ 00:13:18.708 { 00:13:18.708 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:18.708 "subtype": "Discovery", 00:13:18.708 "listen_addresses": [], 00:13:18.708 "allow_any_host": true, 00:13:18.708 "hosts": [] 00:13:18.708 }, 00:13:18.708 { 00:13:18.708 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:18.708 "subtype": "NVMe", 00:13:18.708 "listen_addresses": [ 00:13:18.708 { 00:13:18.708 "trtype": "VFIOUSER", 00:13:18.708 "adrfam": "IPv4", 00:13:18.708 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:18.708 "trsvcid": "0" 00:13:18.708 } 00:13:18.708 ], 00:13:18.708 "allow_any_host": true, 00:13:18.708 "hosts": [], 00:13:18.708 "serial_number": "SPDK1", 00:13:18.708 "model_number": "SPDK bdev Controller", 00:13:18.708 "max_namespaces": 32, 00:13:18.708 "min_cntlid": 1, 00:13:18.708 "max_cntlid": 65519, 00:13:18.708 "namespaces": [ 00:13:18.708 { 00:13:18.708 "nsid": 1, 00:13:18.708 "bdev_name": "Malloc1", 00:13:18.708 "name": "Malloc1", 00:13:18.708 "nguid": "C406BE85750E4D9D95B581EF07FACA09", 00:13:18.708 "uuid": "c406be85-750e-4d9d-95b5-81ef07faca09" 00:13:18.708 }, 00:13:18.708 { 00:13:18.708 "nsid": 2, 00:13:18.708 "bdev_name": "Malloc3", 00:13:18.708 "name": "Malloc3", 00:13:18.708 "nguid": "632C582A3A1A4D9AB53C2A58E38C3AA6", 00:13:18.708 "uuid": "632c582a-3a1a-4d9a-b53c-2a58e38c3aa6" 00:13:18.708 } 00:13:18.708 ] 00:13:18.708 }, 00:13:18.708 { 00:13:18.708 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:18.708 "subtype": "NVMe", 00:13:18.708 "listen_addresses": [ 00:13:18.708 { 00:13:18.708 "trtype": "VFIOUSER", 00:13:18.708 "adrfam": "IPv4", 00:13:18.708 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:18.708 "trsvcid": "0" 00:13:18.708 } 00:13:18.708 ], 00:13:18.708 "allow_any_host": true, 00:13:18.708 "hosts": [], 00:13:18.708 "serial_number": "SPDK2", 00:13:18.708 "model_number": "SPDK bdev Controller", 00:13:18.708 "max_namespaces": 32, 00:13:18.708 "min_cntlid": 1, 00:13:18.708 "max_cntlid": 65519, 00:13:18.708 "namespaces": [ 00:13:18.708 { 00:13:18.708 "nsid": 1, 00:13:18.708 "bdev_name": "Malloc2", 00:13:18.708 "name": "Malloc2", 00:13:18.708 "nguid": "3BAB77430C3E4246A68CE758C3EF6834", 00:13:18.708 "uuid": "3bab7743-0c3e-4246-a68c-e758c3ef6834" 00:13:18.708 } 00:13:18.708 ] 00:13:18.708 } 00:13:18.708 ] 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=902483 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:18.708 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:18.708 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.969 Malloc4 00:13:18.969 [2024-07-15 20:08:16.180054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:18.969 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:18.969 [2024-07-15 20:08:16.349216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:18.969 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:18.969 Asynchronous Event Request test 00:13:18.969 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:18.969 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:18.969 Registering asynchronous event callbacks... 00:13:18.969 Starting namespace attribute notice tests for all controllers... 00:13:18.969 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:18.969 aer_cb - Changed Namespace 00:13:18.969 Cleaning up... 
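The JSON document that follows is the output of the nvmf_get_subsystems RPC traced just above, taken after Malloc4 was attached, so nqn.2019-07.io.spdk:cnode2 now carries a second namespace. For anyone replaying this step by hand, a minimal sketch of the same query against the target's default /var/tmp/spdk.sock RPC socket (the python3 filter is purely illustrative and not part of the test) is:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_get_subsystems
  # illustrative post-processing: list the namespace bdevs behind cnode2
  "$RPC" nvmf_get_subsystems | python3 -c 'import json,sys; print([ns["name"] for s in json.load(sys.stdin) if s["nqn"] == "nqn.2019-07.io.spdk:cnode2" for ns in s["namespaces"]])'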
00:13:19.230 [ 00:13:19.230 { 00:13:19.230 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:19.230 "subtype": "Discovery", 00:13:19.230 "listen_addresses": [], 00:13:19.230 "allow_any_host": true, 00:13:19.230 "hosts": [] 00:13:19.230 }, 00:13:19.230 { 00:13:19.230 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:19.230 "subtype": "NVMe", 00:13:19.230 "listen_addresses": [ 00:13:19.230 { 00:13:19.230 "trtype": "VFIOUSER", 00:13:19.230 "adrfam": "IPv4", 00:13:19.230 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:19.230 "trsvcid": "0" 00:13:19.230 } 00:13:19.230 ], 00:13:19.230 "allow_any_host": true, 00:13:19.230 "hosts": [], 00:13:19.230 "serial_number": "SPDK1", 00:13:19.230 "model_number": "SPDK bdev Controller", 00:13:19.230 "max_namespaces": 32, 00:13:19.230 "min_cntlid": 1, 00:13:19.230 "max_cntlid": 65519, 00:13:19.230 "namespaces": [ 00:13:19.230 { 00:13:19.230 "nsid": 1, 00:13:19.230 "bdev_name": "Malloc1", 00:13:19.230 "name": "Malloc1", 00:13:19.230 "nguid": "C406BE85750E4D9D95B581EF07FACA09", 00:13:19.230 "uuid": "c406be85-750e-4d9d-95b5-81ef07faca09" 00:13:19.230 }, 00:13:19.230 { 00:13:19.230 "nsid": 2, 00:13:19.230 "bdev_name": "Malloc3", 00:13:19.230 "name": "Malloc3", 00:13:19.230 "nguid": "632C582A3A1A4D9AB53C2A58E38C3AA6", 00:13:19.230 "uuid": "632c582a-3a1a-4d9a-b53c-2a58e38c3aa6" 00:13:19.230 } 00:13:19.230 ] 00:13:19.230 }, 00:13:19.230 { 00:13:19.230 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:19.230 "subtype": "NVMe", 00:13:19.230 "listen_addresses": [ 00:13:19.230 { 00:13:19.230 "trtype": "VFIOUSER", 00:13:19.230 "adrfam": "IPv4", 00:13:19.230 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:19.230 "trsvcid": "0" 00:13:19.230 } 00:13:19.230 ], 00:13:19.230 "allow_any_host": true, 00:13:19.230 "hosts": [], 00:13:19.230 "serial_number": "SPDK2", 00:13:19.230 "model_number": "SPDK bdev Controller", 00:13:19.230 "max_namespaces": 32, 00:13:19.230 "min_cntlid": 1, 00:13:19.230 "max_cntlid": 65519, 00:13:19.230 "namespaces": [ 00:13:19.230 { 00:13:19.230 "nsid": 1, 00:13:19.230 "bdev_name": "Malloc2", 00:13:19.230 "name": "Malloc2", 00:13:19.230 "nguid": "3BAB77430C3E4246A68CE758C3EF6834", 00:13:19.230 "uuid": "3bab7743-0c3e-4246-a68c-e758c3ef6834" 00:13:19.230 }, 00:13:19.230 { 00:13:19.230 "nsid": 2, 00:13:19.230 "bdev_name": "Malloc4", 00:13:19.230 "name": "Malloc4", 00:13:19.231 "nguid": "08D37707135F428CB7F000B6C6769CF5", 00:13:19.231 "uuid": "08d37707-135f-428c-b7f0-00b6c6769cf5" 00:13:19.231 } 00:13:19.231 ] 00:13:19.231 } 00:13:19.231 ] 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 902483 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 893446 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 893446 ']' 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 893446 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 893446 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 893446' 00:13:19.231 killing process with pid 893446 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 893446 00:13:19.231 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 893446 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=902557 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 902557' 00:13:19.492 Process pid: 902557 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 902557 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 902557 ']' 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.492 20:08:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:19.492 [2024-07-15 20:08:16.821613] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:19.492 [2024-07-15 20:08:16.822542] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:13:19.492 [2024-07-15 20:08:16.822586] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.492 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.492 [2024-07-15 20:08:16.883336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.752 [2024-07-15 20:08:16.949295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.752 [2024-07-15 20:08:16.949337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
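The interrupt-mode nvmf_tgt relaunched above then has both vfio-user devices rebuilt over JSON-RPC; the individual create-transport, malloc-bdev, subsystem, namespace and listener calls are traced in the lines that follow. Condensed into one place, and assuming the target's default /var/tmp/spdk.sock RPC socket, that sequence is roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t VFIOUSER -M -I            # '-M -I' as passed by the harness
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"        # 64 MiB bdev, 512-byte blocks
      "$RPC" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      "$RPC" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      "$RPC" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done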
00:13:19.752 [2024-07-15 20:08:16.949344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.752 [2024-07-15 20:08:16.949351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.752 [2024-07-15 20:08:16.949356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.752 [2024-07-15 20:08:16.949498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.752 [2024-07-15 20:08:16.949615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.752 [2024-07-15 20:08:16.949772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.752 [2024-07-15 20:08:16.949774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.752 [2024-07-15 20:08:17.014305] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:19.752 [2024-07-15 20:08:17.014311] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:19.752 [2024-07-15 20:08:17.015464] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:19.752 [2024-07-15 20:08:17.015790] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:19.752 [2024-07-15 20:08:17.015885] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:20.324 20:08:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.324 20:08:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:20.324 20:08:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:21.266 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:21.528 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:21.528 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:21.528 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.528 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:21.528 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:21.528 Malloc1 00:13:21.528 20:08:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:21.791 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:22.053 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:22.053 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:22.053 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:22.053 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:22.314 Malloc2 00:13:22.314 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:22.575 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:22.575 20:08:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 902557 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 902557 ']' 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 902557 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 902557 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 902557' 00:13:22.836 killing process with pid 902557 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 902557 00:13:22.836 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 902557 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:23.098 00:13:23.098 real 0m50.464s 00:13:23.098 user 3m20.128s 00:13:23.098 sys 0m2.962s 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:23.098 ************************************ 00:13:23.098 END TEST nvmf_vfio_user 00:13:23.098 ************************************ 00:13:23.098 20:08:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:23.098 20:08:20 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:23.098 20:08:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:23.098 20:08:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.098 20:08:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.098 ************************************ 00:13:23.098 START TEST 
nvmf_vfio_user_nvme_compliance 00:13:23.098 ************************************ 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:23.098 * Looking for test storage... 00:13:23.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.098 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.099 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=903306 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 903306' 00:13:23.360 Process pid: 903306 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 903306 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 903306 ']' 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.360 20:08:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:23.360 [2024-07-15 20:08:20.597694] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:13:23.360 [2024-07-15 20:08:20.597750] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.360 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.360 [2024-07-15 20:08:20.657982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.360 [2024-07-15 20:08:20.722780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.360 [2024-07-15 20:08:20.722818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.360 [2024-07-15 20:08:20.722825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.360 [2024-07-15 20:08:20.722832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.360 [2024-07-15 20:08:20.722837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:23.360 [2024-07-15 20:08:20.722979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.360 [2024-07-15 20:08:20.723089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.360 [2024-07-15 20:08:20.723092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.931 20:08:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.931 20:08:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:24.191 20:08:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.130 malloc0 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:25.130 20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.130 
20:08:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:25.130 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.130 00:13:25.130 00:13:25.130 CUnit - A unit testing framework for C - Version 2.1-3 00:13:25.130 http://cunit.sourceforge.net/ 00:13:25.130 00:13:25.130 00:13:25.130 Suite: nvme_compliance 00:13:25.390 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 20:08:22.602615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.390 [2024-07-15 20:08:22.603950] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:25.390 [2024-07-15 20:08:22.603961] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:25.390 [2024-07-15 20:08:22.603966] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:25.390 [2024-07-15 20:08:22.605631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.390 passed 00:13:25.390 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 20:08:22.701240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.390 [2024-07-15 20:08:22.704256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.390 passed 00:13:25.390 Test: admin_identify_ns ...[2024-07-15 20:08:22.799384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.649 [2024-07-15 20:08:22.860134] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:25.649 [2024-07-15 20:08:22.871136] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:25.649 [2024-07-15 20:08:22.892239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.649 passed 00:13:25.649 Test: admin_get_features_mandatory_features ...[2024-07-15 20:08:22.983906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.649 [2024-07-15 20:08:22.986927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.649 passed 00:13:25.649 Test: admin_get_features_optional_features ...[2024-07-15 20:08:23.081443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.909 [2024-07-15 20:08:23.084458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.909 passed 00:13:25.909 Test: admin_set_features_number_of_queues ...[2024-07-15 20:08:23.176370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.909 [2024-07-15 20:08:23.281223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.909 passed 00:13:26.169 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 20:08:23.376281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.169 [2024-07-15 20:08:23.379303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.169 passed 00:13:26.169 Test: admin_get_log_page_with_lpo ...[2024-07-15 20:08:23.472425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.169 [2024-07-15 20:08:23.540139] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:26.169 [2024-07-15 20:08:23.553185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.169 passed 00:13:26.429 Test: fabric_property_get ...[2024-07-15 20:08:23.647270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.429 [2024-07-15 20:08:23.648509] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:26.429 [2024-07-15 20:08:23.650287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.429 passed 00:13:26.429 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 20:08:23.744822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.429 [2024-07-15 20:08:23.746084] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:26.429 [2024-07-15 20:08:23.747849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.429 passed 00:13:26.429 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 20:08:23.839378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.689 [2024-07-15 20:08:23.923130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:26.689 [2024-07-15 20:08:23.939129] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:26.689 [2024-07-15 20:08:23.944219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.689 passed 00:13:26.689 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 20:08:24.038237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.689 [2024-07-15 20:08:24.039485] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:26.689 [2024-07-15 20:08:24.041262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.689 passed 00:13:26.950 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 20:08:24.134377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.950 [2024-07-15 20:08:24.214141] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:26.950 [2024-07-15 20:08:24.238134] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:26.950 [2024-07-15 20:08:24.243216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.950 passed 00:13:26.950 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 20:08:24.333849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.950 [2024-07-15 20:08:24.335108] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:26.950 [2024-07-15 20:08:24.335134] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:26.950 [2024-07-15 20:08:24.336869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.950 passed 00:13:27.248 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 20:08:24.430371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.248 [2024-07-15 20:08:24.526130] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:27.248 [2024-07-15 20:08:24.534128] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:27.248 [2024-07-15 20:08:24.542128] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:27.248 [2024-07-15 20:08:24.550141] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:27.248 [2024-07-15 20:08:24.579216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.248 passed 00:13:27.512 Test: admin_create_io_sq_verify_pc ...[2024-07-15 20:08:24.668827] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.512 [2024-07-15 20:08:24.684140] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:27.512 [2024-07-15 20:08:24.701971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.512 passed 00:13:27.512 Test: admin_create_io_qp_max_qps ...[2024-07-15 20:08:24.795480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.897 [2024-07-15 20:08:25.911132] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:28.897 [2024-07-15 20:08:26.303471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.158 passed 00:13:29.158 Test: admin_create_io_sq_shared_cq ...[2024-07-15 20:08:26.397386] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.158 [2024-07-15 20:08:26.529132] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:29.158 [2024-07-15 20:08:26.566184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.419 passed 00:13:29.419 00:13:29.419 Run Summary: Type Total Ran Passed Failed Inactive 00:13:29.419 suites 1 1 n/a 0 0 00:13:29.419 tests 18 18 18 0 0 00:13:29.419 asserts 360 360 360 0 n/a 00:13:29.419 00:13:29.419 Elapsed time = 1.664 seconds 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 903306 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 903306 ']' 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 903306 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 903306 00:13:29.419 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 903306' 00:13:29.420 killing process with pid 903306 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 903306 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 903306 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:29.420 00:13:29.420 real 0m6.412s 00:13:29.420 user 0m18.362s 00:13:29.420 sys 0m0.438s 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.420 20:08:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.420 ************************************ 00:13:29.420 END TEST nvmf_vfio_user_nvme_compliance 00:13:29.420 ************************************ 00:13:29.681 20:08:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:29.681 20:08:26 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:29.681 20:08:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:29.681 20:08:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.681 20:08:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:29.681 ************************************ 00:13:29.681 START TEST nvmf_vfio_user_fuzz 00:13:29.681 ************************************ 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:29.681 * Looking for test storage... 00:13:29.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.681 20:08:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.681 20:08:27 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.681 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=904702 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 904702' 00:13:29.682 Process pid: 904702 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 904702 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 904702 ']' 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.682 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:30.624 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.624 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:30.624 20:08:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 malloc0 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:31.567 20:08:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:03.682 Fuzzing completed. 
Shutting down the fuzz application 00:14:03.682 00:14:03.682 Dumping successful admin opcodes: 00:14:03.682 8, 9, 10, 24, 00:14:03.682 Dumping successful io opcodes: 00:14:03.682 0, 00:14:03.682 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1219457, total successful commands: 4779, random_seed: 2343815296 00:14:03.682 NS: 0x200003a1ef00 admin qp, Total commands completed: 153284, total successful commands: 1236, random_seed: 515954944 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 904702 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 904702 ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 904702 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 904702 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 904702' 00:14:03.682 killing process with pid 904702 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 904702 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 904702 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:03.682 00:14:03.682 real 0m33.692s 00:14:03.682 user 0m40.724s 00:14:03.682 sys 0m22.867s 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.682 20:09:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:03.682 ************************************ 00:14:03.682 END TEST nvmf_vfio_user_fuzz 00:14:03.682 ************************************ 00:14:03.682 20:09:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:03.682 20:09:00 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:03.682 20:09:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:03.682 20:09:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.682 20:09:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.682 ************************************ 00:14:03.682 START 
TEST nvmf_host_management 00:14:03.682 ************************************ 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:03.682 * Looking for test storage... 00:14:03.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.682 20:09:00 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.682 20:09:00 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.682 20:09:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:10.276 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.276 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:10.277 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:10.277 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:10.277 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.277 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:14:10.538 00:14:10.538 --- 10.0.0.2 ping statistics --- 00:14:10.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.538 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:14:10.538 00:14:10.538 --- 10.0.0.1 ping statistics --- 00:14:10.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.538 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=915125 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 915125 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 915125 ']' 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:10.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.538 20:09:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.799 [2024-07-15 20:09:07.991311] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:14:10.799 [2024-07-15 20:09:07.991375] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.799 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.799 [2024-07-15 20:09:08.079700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.799 [2024-07-15 20:09:08.175859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.799 [2024-07-15 20:09:08.175916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.799 [2024-07-15 20:09:08.175924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.799 [2024-07-15 20:09:08.175932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.799 [2024-07-15 20:09:08.175938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.799 [2024-07-15 20:09:08.176096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.799 [2024-07-15 20:09:08.176247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.799 [2024-07-15 20:09:08.176638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.799 [2024-07-15 20:09:08.176641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.372 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.372 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:11.372 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.372 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.372 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.633 [2024-07-15 20:09:08.820634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.633 20:09:08 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.633 Malloc0 00:14:11.633 [2024-07-15 20:09:08.879855] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=915196 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 915196 /var/tmp/bdevperf.sock 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 915196 ']' 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
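[Annotation] The RPC batch that host_management.sh cats into rpc_cmd above (rpcs.txt) is not echoed in the log; only its effects are visible here, namely the Malloc0 bdev and the listener on 10.0.0.2:4420. A plausible hand-run equivalent, reconstructed from those effects and the NQNs that appear later in this run, is sketched below. The rpc.py path, the malloc sizes and the serial number are illustrative assumptions, not values taken from the log; the TCP transport itself (-t tcp -o -u 8192) was already created a few lines earlier.

# Sketch only: target-side bring-up roughly matching what this test appears to do.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # sizes assumed
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0  # serial assumed
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0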
00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:11.633 { 00:14:11.633 "params": { 00:14:11.633 "name": "Nvme$subsystem", 00:14:11.633 "trtype": "$TEST_TRANSPORT", 00:14:11.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.633 "adrfam": "ipv4", 00:14:11.633 "trsvcid": "$NVMF_PORT", 00:14:11.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.633 "hdgst": ${hdgst:-false}, 00:14:11.633 "ddgst": ${ddgst:-false} 00:14:11.633 }, 00:14:11.633 "method": "bdev_nvme_attach_controller" 00:14:11.633 } 00:14:11.633 EOF 00:14:11.633 )") 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:11.633 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:11.634 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:11.634 20:09:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:11.634 "params": { 00:14:11.634 "name": "Nvme0", 00:14:11.634 "trtype": "tcp", 00:14:11.634 "traddr": "10.0.0.2", 00:14:11.634 "adrfam": "ipv4", 00:14:11.634 "trsvcid": "4420", 00:14:11.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:11.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:11.634 "hdgst": false, 00:14:11.634 "ddgst": false 00:14:11.634 }, 00:14:11.634 "method": "bdev_nvme_attach_controller" 00:14:11.634 }' 00:14:11.634 [2024-07-15 20:09:08.978004] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:14:11.634 [2024-07-15 20:09:08.978057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915196 ] 00:14:11.634 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.634 [2024-07-15 20:09:09.037551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.895 [2024-07-15 20:09:09.102690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.155 Running I/O for 10 seconds... 
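[Annotation] bdevperf does not talk to the target's RPC socket at all; it is handed the bdev_nvme_attach_controller fragment printed above as a full JSON config through /dev/fd/63. The wrapper around that fragment is not shown in the log, so the reconstruction below assumes the usual SPDK "subsystems"/"bdev" config layout; the file name is made up for the example and the paths assume an SPDK checkout.

# Sketch only: write the config to a file and run the same 10-second workload by hand.
cat > /tmp/nvme0_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64-deep queue of 64 KiB verify I/O for 10 seconds, as in the run above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 10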
00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.418 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.419 [2024-07-15 20:09:09.822805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be 
set 00:14:12.419 [2024-07-15 20:09:09.822864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822879] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822910] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822977] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.822997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823104] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823146] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823183] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.823245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fe40 is same with the state(5) to be set 00:14:12.419 [2024-07-15 20:09:09.825036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.419 [2024-07-15 20:09:09.825074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.419 [2024-07-15 20:09:09.825085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.419 [2024-07-15 20:09:09.825098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.419 [2024-07-15 20:09:09.825107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.419 [2024-07-15 20:09:09.825114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.419 [2024-07-15 20:09:09.825130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.419 [2024-07-15 20:09:09.825138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.419 [2024-07-15 20:09:09.825145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92f3b0 is same with the state(5) to be set 00:14:12.419 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.419 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.419 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.419 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.419 20:09:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.419 20:09:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:12.419 [2024-07-15 20:09:09.842049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92f3b0 (9): Bad file descriptor 00:14:12.419 [2024-07-15 20:09:09.842147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.419 [2024-07-15 20:09:09.842158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.419 [2024-07-15 20:09:09.842173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.419 [2024-07-15 20:09:09.842181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 
[2024-07-15 20:09:09.842249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 
20:09:09.842427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 
20:09:09.842608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.420 [2024-07-15 20:09:09.842729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.420 [2024-07-15 20:09:09.842739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 
20:09:09.842796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.842965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 
20:09:09.842986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.842997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 
20:09:09.843166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.421 [2024-07-15 20:09:09.843247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.421 [2024-07-15 20:09:09.843255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.422 [2024-07-15 20:09:09.843265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.422 [2024-07-15 20:09:09.843272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.422 [2024-07-15 20:09:09.843282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.422 [2024-07-15 20:09:09.843290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.422 [2024-07-15 20:09:09.843299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.422 [2024-07-15 20:09:09.843307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.422 [2024-07-15 20:09:09.843356] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd404f0 was disconnected and freed. reset controller. 
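[Annotation] The burst of ABORTED - SQ DELETION completions above is the intended fault injection: host_management.sh waits until bdevperf has pushed enough reads, then revokes the initiator's host NQN, so the target tears down the queue pair mid-workload and the initiator resets the controller. A hand-run approximation is sketched below, with the sockets, bdev name, NQNs and the 100-read threshold taken from this run; the loop shape and sleep interval are illustrative.

# Sketch only: wait for traffic, then yank and restore the host NQN.
for _ in {1..10}; do
  reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
  [ "${reads:-0}" -ge 100 ] && break   # enough reads observed (387 in this run)
  sleep 1
done
# Revoking the host aborts all in-flight I/O (the SQ DELETION completions above)
# and forces the initiator to reset the controller.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-adding it lets the follow-up 1-second run verify that I/O recovers.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0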
00:14:12.422 [2024-07-15 20:09:09.844548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:12.422 task offset: 56960 on job bdev=Nvme0n1 fails 00:14:12.422 00:14:12.422 Latency(us) 00:14:12.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.422 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:12.422 Job: Nvme0n1 ended in about 0.44 seconds with error 00:14:12.422 Verification LBA range: start 0x0 length 0x400 00:14:12.422 Nvme0n1 : 0.44 1017.90 63.62 146.39 0.00 53501.06 1624.75 47622.83 00:14:12.422 =================================================================================================================== 00:14:12.422 Total : 1017.90 63.62 146.39 0.00 53501.06 1624.75 47622.83 00:14:12.422 [2024-07-15 20:09:09.846515] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:12.682 [2024-07-15 20:09:09.855351] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 915196 00:14:13.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (915196) - No such process 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.684 { 00:14:13.684 "params": { 00:14:13.684 "name": "Nvme$subsystem", 00:14:13.684 "trtype": "$TEST_TRANSPORT", 00:14:13.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.684 "adrfam": "ipv4", 00:14:13.684 "trsvcid": "$NVMF_PORT", 00:14:13.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.684 "hdgst": ${hdgst:-false}, 00:14:13.684 "ddgst": ${ddgst:-false} 00:14:13.684 }, 00:14:13.684 "method": "bdev_nvme_attach_controller" 00:14:13.684 } 00:14:13.684 EOF 00:14:13.684 )") 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
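[Annotation] The second bdevperf invocation above reuses the same generated attach-controller JSON, this time for a single second of verify traffic against the re-admitted host. Reusing the (hypothetical) config file from the earlier sketch, the equivalent stand-alone re-check would be simply:

# Sketch only: one second of 64-deep, 64 KiB verify I/O; finishing without
# errors (as in the output that follows) shows the path recovered after the
# host NQN was restored.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 1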
00:14:13.684 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:13.685 20:09:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.685 "params": { 00:14:13.685 "name": "Nvme0", 00:14:13.685 "trtype": "tcp", 00:14:13.685 "traddr": "10.0.0.2", 00:14:13.685 "adrfam": "ipv4", 00:14:13.685 "trsvcid": "4420", 00:14:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.685 "hdgst": false, 00:14:13.685 "ddgst": false 00:14:13.685 }, 00:14:13.685 "method": "bdev_nvme_attach_controller" 00:14:13.685 }' 00:14:13.685 [2024-07-15 20:09:10.896662] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:14:13.685 [2024-07-15 20:09:10.896721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915991 ] 00:14:13.685 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.685 [2024-07-15 20:09:10.956060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.685 [2024-07-15 20:09:11.020709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.959 Running I/O for 1 seconds... 00:14:14.900 00:14:14.900 Latency(us) 00:14:14.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.900 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:14.900 Verification LBA range: start 0x0 length 0x400 00:14:14.900 Nvme0n1 : 1.02 1135.14 70.95 0.00 0.00 55521.77 12014.93 43253.76 00:14:14.900 =================================================================================================================== 00:14:14.900 Total : 1135.14 70.95 0.00 0.00 55521.77 12014.93 43253.76 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.161 rmmod nvme_tcp 00:14:15.161 rmmod nvme_fabrics 00:14:15.161 rmmod nvme_keyring 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 915125 ']' 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 915125 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 915125 ']' 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 915125 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.161 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 915125 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 915125' 00:14:15.422 killing process with pid 915125 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 915125 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 915125 00:14:15.422 [2024-07-15 20:09:12.702222] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.422 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.423 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.423 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.423 20:09:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.423 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.423 20:09:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.973 20:09:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.973 20:09:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:17.973 00:14:17.973 real 0m14.135s 00:14:17.973 user 0m22.749s 00:14:17.973 sys 0m6.266s 00:14:17.973 20:09:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.973 20:09:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.973 ************************************ 00:14:17.973 END TEST nvmf_host_management 00:14:17.973 ************************************ 00:14:17.973 20:09:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:17.973 20:09:14 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:17.973 20:09:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:17.973 20:09:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.973 20:09:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.973 ************************************ 00:14:17.973 START TEST nvmf_lvol 00:14:17.973 
************************************ 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:17.973 * Looking for test storage... 00:14:17.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.973 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.974 20:09:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.974 20:09:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.974 20:09:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:24.570 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.571 20:09:21 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:24.571 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:24.571 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:24.571 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:24.571 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.571 20:09:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:14:24.837 00:14:24.837 --- 10.0.0.2 ping statistics --- 00:14:24.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.837 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:24.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:14:24.837 00:14:24.837 --- 10.0.0.1 ping statistics --- 00:14:24.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.837 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=920656 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 920656 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 920656 ']' 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.837 20:09:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.097 [2024-07-15 20:09:22.271484] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:14:25.097 [2024-07-15 20:09:22.271550] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.097 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.097 [2024-07-15 20:09:22.343104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.097 [2024-07-15 20:09:22.417277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.097 [2024-07-15 20:09:22.417316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:25.097 [2024-07-15 20:09:22.417323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.097 [2024-07-15 20:09:22.417330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.097 [2024-07-15 20:09:22.417336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.097 [2024-07-15 20:09:22.417476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.097 [2024-07-15 20:09:22.417601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.097 [2024-07-15 20:09:22.417604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.667 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:25.928 [2024-07-15 20:09:23.237889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.928 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:26.189 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:26.189 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:26.449 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:26.449 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:26.449 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:26.709 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4e7596d2-4da1-4c0d-afb1-d9d7695e1411 00:14:26.709 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e7596d2-4da1-4c0d-afb1-d9d7695e1411 lvol 20 00:14:26.970 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=21e5222f-588b-437e-be4a-4ad963fd0e7e 00:14:26.970 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:26.970 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21e5222f-588b-437e-be4a-4ad963fd0e7e 00:14:27.231 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:27.231 [2024-07-15 20:09:24.628903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.231 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.491 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=921074 00:14:27.491 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:27.491 20:09:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:27.492 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.433 20:09:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 21e5222f-588b-437e-be4a-4ad963fd0e7e MY_SNAPSHOT 00:14:28.693 20:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=12873d29-be06-4e4b-8919-47945c3e6466 00:14:28.693 20:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 21e5222f-588b-437e-be4a-4ad963fd0e7e 30 00:14:28.954 20:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 12873d29-be06-4e4b-8919-47945c3e6466 MY_CLONE 00:14:29.215 20:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=13717c68-40dd-44c8-a2f8-2949ead5d013 00:14:29.215 20:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 13717c68-40dd-44c8-a2f8-2949ead5d013 00:14:29.476 20:09:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 921074 00:14:39.506 Initializing NVMe Controllers 00:14:39.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:39.506 Controller IO queue size 128, less than required. 00:14:39.506 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:39.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:39.506 Initialization complete. Launching workers. 
00:14:39.506 ======================================================== 00:14:39.507 Latency(us) 00:14:39.507 Device Information : IOPS MiB/s Average min max 00:14:39.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12513.39 48.88 10234.25 2173.22 67379.36 00:14:39.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18071.08 70.59 7083.65 537.67 46702.53 00:14:39.507 ======================================================== 00:14:39.507 Total : 30584.47 119.47 8372.69 537.67 67379.36 00:14:39.507 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 21e5222f-588b-437e-be4a-4ad963fd0e7e 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e7596d2-4da1-4c0d-afb1-d9d7695e1411 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.507 rmmod nvme_tcp 00:14:39.507 rmmod nvme_fabrics 00:14:39.507 rmmod nvme_keyring 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 920656 ']' 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 920656 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 920656 ']' 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 920656 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 920656 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 920656' 00:14:39.507 killing process with pid 920656 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 920656 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 920656 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.507 20:09:35 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.507 20:09:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.893 20:09:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:40.893 00:14:40.893 real 0m23.114s 00:14:40.893 user 1m2.223s 00:14:40.893 sys 0m8.250s 00:14:40.893 20:09:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.893 20:09:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:40.893 ************************************ 00:14:40.894 END TEST nvmf_lvol 00:14:40.894 ************************************ 00:14:40.894 20:09:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:40.894 20:09:38 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:40.894 20:09:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:40.894 20:09:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.894 20:09:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:40.894 ************************************ 00:14:40.894 START TEST nvmf_lvs_grow 00:14:40.894 ************************************ 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:40.894 * Looking for test storage... 
00:14:40.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:40.894 20:09:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:47.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:47.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:47.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:47.496 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.496 20:09:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:14:47.757 00:14:47.757 --- 10.0.0.2 ping statistics --- 00:14:47.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.757 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:14:47.757 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:14:48.018 00:14:48.018 --- 10.0.0.1 ping statistics --- 00:14:48.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.018 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:48.018 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=927379 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 927379 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 927379 ']' 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.019 20:09:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:48.019 [2024-07-15 20:09:45.293259] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:14:48.019 [2024-07-15 20:09:45.293307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.019 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.019 [2024-07-15 20:09:45.358667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.019 [2024-07-15 20:09:45.422231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.019 [2024-07-15 20:09:45.422268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:48.019 [2024-07-15 20:09:45.422275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.019 [2024-07-15 20:09:45.422281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.019 [2024-07-15 20:09:45.422287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.019 [2024-07-15 20:09:45.422307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.963 [2024-07-15 20:09:46.245488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:48.963 ************************************ 00:14:48.963 START TEST lvs_grow_clean 00:14:48.963 ************************************ 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.963 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.224 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:49.224 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:49.484 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=10ce6d23-5672-41d7-9938-5a7dffc4604e 00:14:49.484 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:14:49.484 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:49.484 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:49.484 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:49.484 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10ce6d23-5672-41d7-9938-5a7dffc4604e lvol 150 00:14:49.745 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d178b327-d261-40fb-9d3f-2aac3844a6bb 00:14:49.745 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:49.745 20:09:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:49.745 [2024-07-15 20:09:47.134214] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:49.745 [2024-07-15 20:09:47.134269] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:49.745 true 00:14:49.745 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:14:49.745 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:50.006 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:50.006 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:50.267 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d178b327-d261-40fb-9d3f-2aac3844a6bb 00:14:50.267 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:50.528 [2024-07-15 20:09:47.752111] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=928062 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 928062 /var/tmp/bdevperf.sock 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 928062 ']' 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.528 20:09:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:50.788 [2024-07-15 20:09:47.968904] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:14:50.788 [2024-07-15 20:09:47.968956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928062 ] 00:14:50.788 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.788 [2024-07-15 20:09:48.042637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.788 [2024-07-15 20:09:48.106388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.360 20:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.360 20:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:51.360 20:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:51.621 Nvme0n1 00:14:51.621 20:09:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:51.881 [ 00:14:51.881 { 00:14:51.881 "name": "Nvme0n1", 00:14:51.881 "aliases": [ 00:14:51.881 "d178b327-d261-40fb-9d3f-2aac3844a6bb" 00:14:51.881 ], 00:14:51.881 "product_name": "NVMe disk", 00:14:51.881 "block_size": 4096, 00:14:51.881 "num_blocks": 38912, 00:14:51.881 "uuid": "d178b327-d261-40fb-9d3f-2aac3844a6bb", 00:14:51.881 "assigned_rate_limits": { 00:14:51.881 "rw_ios_per_sec": 0, 00:14:51.881 "rw_mbytes_per_sec": 0, 00:14:51.881 "r_mbytes_per_sec": 0, 00:14:51.881 "w_mbytes_per_sec": 0 00:14:51.881 }, 00:14:51.881 "claimed": false, 00:14:51.881 "zoned": false, 00:14:51.881 "supported_io_types": { 00:14:51.881 "read": true, 00:14:51.881 "write": true, 00:14:51.881 "unmap": true, 00:14:51.881 "flush": true, 00:14:51.881 "reset": true, 00:14:51.881 "nvme_admin": true, 00:14:51.881 "nvme_io": true, 00:14:51.881 "nvme_io_md": false, 00:14:51.881 "write_zeroes": true, 00:14:51.881 "zcopy": false, 00:14:51.881 "get_zone_info": false, 00:14:51.881 "zone_management": false, 00:14:51.881 "zone_append": false, 00:14:51.881 "compare": true, 00:14:51.881 "compare_and_write": true, 00:14:51.881 "abort": true, 00:14:51.881 "seek_hole": false, 00:14:51.881 "seek_data": false, 00:14:51.881 "copy": true, 00:14:51.881 "nvme_iov_md": false 00:14:51.881 }, 00:14:51.881 "memory_domains": [ 00:14:51.881 { 00:14:51.881 "dma_device_id": "system", 00:14:51.881 "dma_device_type": 1 00:14:51.881 } 00:14:51.881 ], 00:14:51.881 "driver_specific": { 00:14:51.881 "nvme": [ 00:14:51.881 { 00:14:51.881 "trid": { 00:14:51.881 "trtype": "TCP", 00:14:51.881 "adrfam": "IPv4", 00:14:51.881 "traddr": "10.0.0.2", 00:14:51.881 "trsvcid": "4420", 00:14:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:51.881 }, 00:14:51.881 "ctrlr_data": { 00:14:51.881 "cntlid": 1, 00:14:51.881 "vendor_id": "0x8086", 00:14:51.881 "model_number": "SPDK bdev Controller", 00:14:51.881 "serial_number": "SPDK0", 00:14:51.881 "firmware_revision": "24.09", 00:14:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.881 "oacs": { 00:14:51.881 "security": 0, 00:14:51.881 "format": 0, 00:14:51.881 "firmware": 0, 00:14:51.881 "ns_manage": 0 00:14:51.881 }, 00:14:51.881 "multi_ctrlr": true, 00:14:51.881 "ana_reporting": false 00:14:51.881 }, 
00:14:51.882 "vs": { 00:14:51.882 "nvme_version": "1.3" 00:14:51.882 }, 00:14:51.882 "ns_data": { 00:14:51.882 "id": 1, 00:14:51.882 "can_share": true 00:14:51.882 } 00:14:51.882 } 00:14:51.882 ], 00:14:51.882 "mp_policy": "active_passive" 00:14:51.882 } 00:14:51.882 } 00:14:51.882 ] 00:14:51.882 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=928142 00:14:51.882 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:51.882 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.882 Running I/O for 10 seconds... 00:14:52.825 Latency(us) 00:14:52.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.825 Nvme0n1 : 1.00 17548.00 68.55 0.00 0.00 0.00 0.00 0.00 00:14:52.825 =================================================================================================================== 00:14:52.825 Total : 17548.00 68.55 0.00 0.00 0.00 0.00 0.00 00:14:52.825 00:14:53.767 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:14:54.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.028 Nvme0n1 : 2.00 17642.00 68.91 0.00 0.00 0.00 0.00 0.00 00:14:54.028 =================================================================================================================== 00:14:54.028 Total : 17642.00 68.91 0.00 0.00 0.00 0.00 0.00 00:14:54.028 00:14:54.028 true 00:14:54.028 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:14:54.028 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:54.028 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:54.028 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:54.028 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 928142 00:14:54.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.969 Nvme0n1 : 3.00 17673.33 69.04 0.00 0.00 0.00 0.00 0.00 00:14:54.969 =================================================================================================================== 00:14:54.969 Total : 17673.33 69.04 0.00 0.00 0.00 0.00 0.00 00:14:54.969 00:14:55.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.911 Nvme0n1 : 4.00 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:14:55.912 =================================================================================================================== 00:14:55.912 Total : 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:14:55.912 00:14:56.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.852 Nvme0n1 : 5.00 17738.40 69.29 0.00 0.00 0.00 0.00 0.00 00:14:56.852 =================================================================================================================== 00:14:56.852 
Total : 17738.40 69.29 0.00 0.00 0.00 0.00 0.00 00:14:56.852 00:14:57.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.794 Nvme0n1 : 6.00 17763.33 69.39 0.00 0.00 0.00 0.00 0.00 00:14:57.794 =================================================================================================================== 00:14:57.794 Total : 17763.33 69.39 0.00 0.00 0.00 0.00 0.00 00:14:57.794 00:14:59.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.176 Nvme0n1 : 7.00 17782.29 69.46 0.00 0.00 0.00 0.00 0.00 00:14:59.176 =================================================================================================================== 00:14:59.176 Total : 17782.29 69.46 0.00 0.00 0.00 0.00 0.00 00:14:59.176 00:15:00.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.148 Nvme0n1 : 8.00 17798.50 69.53 0.00 0.00 0.00 0.00 0.00 00:15:00.148 =================================================================================================================== 00:15:00.148 Total : 17798.50 69.53 0.00 0.00 0.00 0.00 0.00 00:15:00.148 00:15:01.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.139 Nvme0n1 : 9.00 17809.33 69.57 0.00 0.00 0.00 0.00 0.00 00:15:01.139 =================================================================================================================== 00:15:01.139 Total : 17809.33 69.57 0.00 0.00 0.00 0.00 0.00 00:15:01.139 00:15:02.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.079 Nvme0n1 : 10.00 17822.00 69.62 0.00 0.00 0.00 0.00 0.00 00:15:02.079 =================================================================================================================== 00:15:02.079 Total : 17822.00 69.62 0.00 0.00 0.00 0.00 0.00 00:15:02.079 00:15:02.079 00:15:02.079 Latency(us) 00:15:02.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.079 Nvme0n1 : 10.01 17821.81 69.62 0.00 0.00 7177.20 4805.97 12342.61 00:15:02.079 =================================================================================================================== 00:15:02.079 Total : 17821.81 69.62 0.00 0.00 7177.20 4805.97 12342.61 00:15:02.079 0 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 928062 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 928062 ']' 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 928062 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 928062 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 928062' 00:15:02.079 killing process with pid 928062 00:15:02.079 20:09:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 928062 00:15:02.079 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.079 00:15:02.079 Latency(us) 00:15:02.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.079 =================================================================================================================== 00:15:02.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 928062 00:15:02.079 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:02.340 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:02.340 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:02.340 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:02.600 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:02.601 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:02.601 20:09:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.861 [2024-07-15 20:10:00.065276] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:02.861 request: 00:15:02.861 { 00:15:02.861 "uuid": "10ce6d23-5672-41d7-9938-5a7dffc4604e", 00:15:02.861 "method": "bdev_lvol_get_lvstores", 00:15:02.861 "req_id": 1 00:15:02.861 } 00:15:02.861 Got JSON-RPC error response 00:15:02.861 response: 00:15:02.861 { 00:15:02.861 "code": -19, 00:15:02.861 "message": "No such device" 00:15:02.861 } 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.861 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.122 aio_bdev 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d178b327-d261-40fb-9d3f-2aac3844a6bb 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d178b327-d261-40fb-9d3f-2aac3844a6bb 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.122 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.382 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d178b327-d261-40fb-9d3f-2aac3844a6bb -t 2000 00:15:03.382 [ 00:15:03.382 { 00:15:03.382 "name": "d178b327-d261-40fb-9d3f-2aac3844a6bb", 00:15:03.382 "aliases": [ 00:15:03.382 "lvs/lvol" 00:15:03.382 ], 00:15:03.382 "product_name": "Logical Volume", 00:15:03.382 "block_size": 4096, 00:15:03.382 "num_blocks": 38912, 00:15:03.382 "uuid": "d178b327-d261-40fb-9d3f-2aac3844a6bb", 00:15:03.382 "assigned_rate_limits": { 00:15:03.382 "rw_ios_per_sec": 0, 00:15:03.382 "rw_mbytes_per_sec": 0, 00:15:03.382 "r_mbytes_per_sec": 0, 00:15:03.382 "w_mbytes_per_sec": 0 00:15:03.382 }, 00:15:03.382 "claimed": false, 00:15:03.382 "zoned": false, 00:15:03.382 "supported_io_types": { 00:15:03.382 "read": true, 00:15:03.382 "write": true, 00:15:03.382 "unmap": true, 00:15:03.382 "flush": false, 00:15:03.382 "reset": true, 00:15:03.382 "nvme_admin": false, 00:15:03.382 "nvme_io": false, 00:15:03.382 
"nvme_io_md": false, 00:15:03.382 "write_zeroes": true, 00:15:03.382 "zcopy": false, 00:15:03.382 "get_zone_info": false, 00:15:03.382 "zone_management": false, 00:15:03.382 "zone_append": false, 00:15:03.382 "compare": false, 00:15:03.382 "compare_and_write": false, 00:15:03.382 "abort": false, 00:15:03.382 "seek_hole": true, 00:15:03.382 "seek_data": true, 00:15:03.382 "copy": false, 00:15:03.382 "nvme_iov_md": false 00:15:03.382 }, 00:15:03.382 "driver_specific": { 00:15:03.382 "lvol": { 00:15:03.382 "lvol_store_uuid": "10ce6d23-5672-41d7-9938-5a7dffc4604e", 00:15:03.382 "base_bdev": "aio_bdev", 00:15:03.382 "thin_provision": false, 00:15:03.382 "num_allocated_clusters": 38, 00:15:03.382 "snapshot": false, 00:15:03.382 "clone": false, 00:15:03.382 "esnap_clone": false 00:15:03.382 } 00:15:03.382 } 00:15:03.382 } 00:15:03.383 ] 00:15:03.383 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:03.383 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:03.383 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:03.643 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:03.643 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:03.643 20:10:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:03.903 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:03.903 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d178b327-d261-40fb-9d3f-2aac3844a6bb 00:15:03.903 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10ce6d23-5672-41d7-9938-5a7dffc4604e 00:15:04.163 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:04.163 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.423 00:15:04.423 real 0m15.296s 00:15:04.423 user 0m14.785s 00:15:04.423 sys 0m1.509s 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:04.423 ************************************ 00:15:04.423 END TEST lvs_grow_clean 00:15:04.423 ************************************ 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:04.423 ************************************ 00:15:04.423 START TEST lvs_grow_dirty 00:15:04.423 ************************************ 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.423 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:04.684 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:04.684 20:10:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:04.684 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ee9b833-335b-419b-9667-efa7f064c4da 00:15:04.684 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:04.684 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:04.944 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:04.944 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:04.944 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ee9b833-335b-419b-9667-efa7f064c4da lvol 150 00:15:04.944 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:04.944 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.944 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:05.204 
[2024-07-15 20:10:02.482630] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:05.204 [2024-07-15 20:10:02.482683] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:05.204 true 00:15:05.204 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:05.204 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:05.465 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:05.465 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:05.465 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:05.724 20:10:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:05.724 [2024-07-15 20:10:03.088469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.724 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=931075 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 931075 /var/tmp/bdevperf.sock 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 931075 ']' 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
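The dirty variant rebuilds the same layout: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters on top of it, a 150 MiB lvol, and then a truncate to 400 MiB plus bdev_aio_rescan so the store can later be grown. The cluster counts the test asserts follow directly from that sizing; in the numbers seen in this run, one cluster goes to lvstore metadata, which is why 49 rather than 50 is expected. A sketch of the arithmetic and the calls, with the lvstore UUID as a placeholder and paths shortened:

  # 200 MiB / 4 MiB = 50 clusters, minus 1 for lvstore metadata -> 49 data clusters
  # after truncate -s 400M + bdev_aio_rescan + bdev_lvol_grow_lvstore: 100 - 1 = 99
  truncate -s 200M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs
  ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 49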
00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.984 20:10:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:05.984 [2024-07-15 20:10:03.317292] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:05.984 [2024-07-15 20:10:03.317342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931075 ] 00:15:05.984 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.984 [2024-07-15 20:10:03.391058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.243 [2024-07-15 20:10:03.444785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.813 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.813 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:06.813 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:07.073 Nvme0n1 00:15:07.073 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:07.334 [ 00:15:07.334 { 00:15:07.334 "name": "Nvme0n1", 00:15:07.334 "aliases": [ 00:15:07.334 "8bd19515-4c2d-404d-a3c8-e065ee94f86a" 00:15:07.334 ], 00:15:07.334 "product_name": "NVMe disk", 00:15:07.334 "block_size": 4096, 00:15:07.334 "num_blocks": 38912, 00:15:07.334 "uuid": "8bd19515-4c2d-404d-a3c8-e065ee94f86a", 00:15:07.334 "assigned_rate_limits": { 00:15:07.334 "rw_ios_per_sec": 0, 00:15:07.334 "rw_mbytes_per_sec": 0, 00:15:07.334 "r_mbytes_per_sec": 0, 00:15:07.334 "w_mbytes_per_sec": 0 00:15:07.334 }, 00:15:07.334 "claimed": false, 00:15:07.334 "zoned": false, 00:15:07.334 "supported_io_types": { 00:15:07.334 "read": true, 00:15:07.334 "write": true, 00:15:07.334 "unmap": true, 00:15:07.334 "flush": true, 00:15:07.334 "reset": true, 00:15:07.334 "nvme_admin": true, 00:15:07.334 "nvme_io": true, 00:15:07.334 "nvme_io_md": false, 00:15:07.334 "write_zeroes": true, 00:15:07.334 "zcopy": false, 00:15:07.334 "get_zone_info": false, 00:15:07.334 "zone_management": false, 00:15:07.334 "zone_append": false, 00:15:07.334 "compare": true, 00:15:07.334 "compare_and_write": true, 00:15:07.334 "abort": true, 00:15:07.335 "seek_hole": false, 00:15:07.335 "seek_data": false, 00:15:07.335 "copy": true, 00:15:07.335 "nvme_iov_md": false 00:15:07.335 }, 00:15:07.335 "memory_domains": [ 00:15:07.335 { 00:15:07.335 "dma_device_id": "system", 00:15:07.335 "dma_device_type": 1 00:15:07.335 } 00:15:07.335 ], 00:15:07.335 "driver_specific": { 00:15:07.335 "nvme": [ 00:15:07.335 { 00:15:07.335 "trid": { 00:15:07.335 "trtype": "TCP", 00:15:07.335 "adrfam": "IPv4", 00:15:07.335 "traddr": "10.0.0.2", 00:15:07.335 "trsvcid": "4420", 00:15:07.335 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:07.335 }, 00:15:07.335 "ctrlr_data": { 00:15:07.335 "cntlid": 1, 00:15:07.335 "vendor_id": "0x8086", 00:15:07.335 "model_number": "SPDK bdev Controller", 00:15:07.335 "serial_number": "SPDK0", 
00:15:07.335 "firmware_revision": "24.09", 00:15:07.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:07.335 "oacs": { 00:15:07.335 "security": 0, 00:15:07.335 "format": 0, 00:15:07.335 "firmware": 0, 00:15:07.335 "ns_manage": 0 00:15:07.335 }, 00:15:07.335 "multi_ctrlr": true, 00:15:07.335 "ana_reporting": false 00:15:07.335 }, 00:15:07.335 "vs": { 00:15:07.335 "nvme_version": "1.3" 00:15:07.335 }, 00:15:07.335 "ns_data": { 00:15:07.335 "id": 1, 00:15:07.335 "can_share": true 00:15:07.335 } 00:15:07.335 } 00:15:07.335 ], 00:15:07.335 "mp_policy": "active_passive" 00:15:07.335 } 00:15:07.335 } 00:15:07.335 ] 00:15:07.335 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.335 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=931237 00:15:07.335 20:10:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:07.335 Running I/O for 10 seconds... 00:15:08.278 Latency(us) 00:15:08.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.278 Nvme0n1 : 1.00 18056.00 70.53 0.00 0.00 0.00 0.00 0.00 00:15:08.278 =================================================================================================================== 00:15:08.278 Total : 18056.00 70.53 0.00 0.00 0.00 0.00 0.00 00:15:08.278 00:15:09.221 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:09.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.483 Nvme0n1 : 2.00 18179.50 71.01 0.00 0.00 0.00 0.00 0.00 00:15:09.483 =================================================================================================================== 00:15:09.483 Total : 18179.50 71.01 0.00 0.00 0.00 0.00 0.00 00:15:09.483 00:15:09.483 true 00:15:09.483 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:09.483 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:09.743 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:09.743 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:09.743 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 931237 00:15:10.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.315 Nvme0n1 : 3.00 18221.00 71.18 0.00 0.00 0.00 0.00 0.00 00:15:10.315 =================================================================================================================== 00:15:10.315 Total : 18221.00 71.18 0.00 0.00 0.00 0.00 0.00 00:15:10.315 00:15:11.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.700 Nvme0n1 : 4.00 18257.50 71.32 0.00 0.00 0.00 0.00 0.00 00:15:11.700 =================================================================================================================== 00:15:11.700 Total : 18257.50 71.32 0.00 0.00 
0.00 0.00 0.00 00:15:11.700 00:15:12.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.643 Nvme0n1 : 5.00 18296.80 71.47 0.00 0.00 0.00 0.00 0.00 00:15:12.643 =================================================================================================================== 00:15:12.643 Total : 18296.80 71.47 0.00 0.00 0.00 0.00 0.00 00:15:12.643 00:15:13.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.585 Nvme0n1 : 6.00 18308.67 71.52 0.00 0.00 0.00 0.00 0.00 00:15:13.585 =================================================================================================================== 00:15:13.585 Total : 18308.67 71.52 0.00 0.00 0.00 0.00 0.00 00:15:13.585 00:15:14.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.526 Nvme0n1 : 7.00 18321.29 71.57 0.00 0.00 0.00 0.00 0.00 00:15:14.526 =================================================================================================================== 00:15:14.526 Total : 18321.29 71.57 0.00 0.00 0.00 0.00 0.00 00:15:14.526 00:15:15.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.501 Nvme0n1 : 8.00 18335.12 71.62 0.00 0.00 0.00 0.00 0.00 00:15:15.501 =================================================================================================================== 00:15:15.501 Total : 18335.12 71.62 0.00 0.00 0.00 0.00 0.00 00:15:15.501 00:15:16.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.442 Nvme0n1 : 9.00 18345.89 71.66 0.00 0.00 0.00 0.00 0.00 00:15:16.442 =================================================================================================================== 00:15:16.442 Total : 18345.89 71.66 0.00 0.00 0.00 0.00 0.00 00:15:16.442 00:15:17.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.384 Nvme0n1 : 10.00 18358.00 71.71 0.00 0.00 0.00 0.00 0.00 00:15:17.384 =================================================================================================================== 00:15:17.384 Total : 18358.00 71.71 0.00 0.00 0.00 0.00 0.00 00:15:17.384 00:15:17.384 00:15:17.384 Latency(us) 00:15:17.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.384 Nvme0n1 : 10.01 18356.76 71.71 0.00 0.00 6969.77 2539.52 10813.44 00:15:17.384 =================================================================================================================== 00:15:17.384 Total : 18356.76 71.71 0.00 0.00 6969.77 2539.52 10813.44 00:15:17.384 0 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 931075 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 931075 ']' 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 931075 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 931075 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:17.384 20:10:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 931075' 00:15:17.384 killing process with pid 931075 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 931075 00:15:17.384 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.384 00:15:17.384 Latency(us) 00:15:17.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.384 =================================================================================================================== 00:15:17.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:17.384 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 931075 00:15:17.645 20:10:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:17.645 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:17.906 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:17.906 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 927379 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 927379 00:15:18.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 927379 Killed "${NVMF_APP[@]}" "$@" 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:18.166 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=933498 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 933498 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 933498 ']' 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.167 20:10:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:18.167 [2024-07-15 20:10:15.520519] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:18.167 [2024-07-15 20:10:15.520577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.167 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.167 [2024-07-15 20:10:15.587587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.427 [2024-07-15 20:10:15.650920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.427 [2024-07-15 20:10:15.650956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.427 [2024-07-15 20:10:15.650963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.427 [2024-07-15 20:10:15.650969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.427 [2024-07-15 20:10:15.650975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
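The free-cluster check above also pins down the allocation math: the 150 MiB lvol is thick provisioned, so it holds ceil(150 MiB / 4 MiB) = 38 clusters and the grown store reports 99 - 38 = 61 free. The dirty part follows immediately: the first target (pid 927379) is removed with kill -9 instead of a clean shutdown, a fresh nvmf_tgt is started, and re-creating the AIO bdev forces the blobstore to run recovery, which is what the bs_recover NOTICEs further down report. Reduced to the commands involved (pid and UUID variables are placeholders):

  ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'   # 61 = 99 - 38
  kill -9 "$nvmfpid"     # no clean shutdown, so the next load of the store must recover
  # a new nvmf_tgt is started, then:
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # -> "Performing recovery on blobstore"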
00:15:18.427 [2024-07-15 20:10:15.650996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.000 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:19.261 [2024-07-15 20:10:16.459938] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:19.261 [2024-07-15 20:10:16.460030] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:19.261 [2024-07-15 20:10:16.460058] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:19.261 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8bd19515-4c2d-404d-a3c8-e065ee94f86a -t 2000 00:15:19.522 [ 00:15:19.522 { 00:15:19.522 "name": "8bd19515-4c2d-404d-a3c8-e065ee94f86a", 00:15:19.522 "aliases": [ 00:15:19.522 "lvs/lvol" 00:15:19.522 ], 00:15:19.522 "product_name": "Logical Volume", 00:15:19.522 "block_size": 4096, 00:15:19.522 "num_blocks": 38912, 00:15:19.522 "uuid": "8bd19515-4c2d-404d-a3c8-e065ee94f86a", 00:15:19.522 "assigned_rate_limits": { 00:15:19.522 "rw_ios_per_sec": 0, 00:15:19.522 "rw_mbytes_per_sec": 0, 00:15:19.522 "r_mbytes_per_sec": 0, 00:15:19.522 "w_mbytes_per_sec": 0 00:15:19.522 }, 00:15:19.522 "claimed": false, 00:15:19.522 "zoned": false, 00:15:19.522 "supported_io_types": { 00:15:19.522 "read": true, 00:15:19.522 "write": true, 00:15:19.522 "unmap": true, 00:15:19.522 "flush": false, 00:15:19.522 "reset": true, 00:15:19.522 "nvme_admin": false, 00:15:19.522 "nvme_io": false, 00:15:19.522 "nvme_io_md": 
false, 00:15:19.522 "write_zeroes": true, 00:15:19.522 "zcopy": false, 00:15:19.522 "get_zone_info": false, 00:15:19.522 "zone_management": false, 00:15:19.522 "zone_append": false, 00:15:19.522 "compare": false, 00:15:19.522 "compare_and_write": false, 00:15:19.522 "abort": false, 00:15:19.522 "seek_hole": true, 00:15:19.522 "seek_data": true, 00:15:19.522 "copy": false, 00:15:19.522 "nvme_iov_md": false 00:15:19.522 }, 00:15:19.522 "driver_specific": { 00:15:19.522 "lvol": { 00:15:19.522 "lvol_store_uuid": "2ee9b833-335b-419b-9667-efa7f064c4da", 00:15:19.522 "base_bdev": "aio_bdev", 00:15:19.522 "thin_provision": false, 00:15:19.522 "num_allocated_clusters": 38, 00:15:19.522 "snapshot": false, 00:15:19.522 "clone": false, 00:15:19.522 "esnap_clone": false 00:15:19.522 } 00:15:19.522 } 00:15:19.522 } 00:15:19.522 ] 00:15:19.522 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:19.522 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:19.522 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:19.783 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:19.783 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:19.783 20:10:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:19.783 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:19.783 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:20.045 [2024-07-15 20:10:17.247888] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
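With the recovered lvol verified (38 clusters still allocated, 61 free, 99 total), the hot-remove assertion is repeated against the dirty store: aio_bdev is deleted and bdev_lvol_get_lvstores must now fail. The trace above and continuing below is autotest_common.sh's NOT/valid_exec_arg machinery resolving the rpc.py path and inverting the command's exit status. A minimal stand-in for that idea, not the repository's helper, which also validates that the argument is executable:

  NOT() {   # succeed only when the wrapped command fails
          if "$@"; then return 1; else return 0; fi
  }
  NOT ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid"   # passes: the RPC returns -19 "No such device"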
00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:20.045 request: 00:15:20.045 { 00:15:20.045 "uuid": "2ee9b833-335b-419b-9667-efa7f064c4da", 00:15:20.045 "method": "bdev_lvol_get_lvstores", 00:15:20.045 "req_id": 1 00:15:20.045 } 00:15:20.045 Got JSON-RPC error response 00:15:20.045 response: 00:15:20.045 { 00:15:20.045 "code": -19, 00:15:20.045 "message": "No such device" 00:15:20.045 } 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.045 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:20.305 aio_bdev 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:20.305 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8bd19515-4c2d-404d-a3c8-e065ee94f86a -t 2000 00:15:20.566 [ 00:15:20.566 { 00:15:20.566 "name": "8bd19515-4c2d-404d-a3c8-e065ee94f86a", 00:15:20.566 "aliases": [ 00:15:20.566 "lvs/lvol" 00:15:20.566 ], 00:15:20.566 "product_name": "Logical Volume", 00:15:20.566 "block_size": 4096, 00:15:20.566 "num_blocks": 38912, 00:15:20.566 "uuid": "8bd19515-4c2d-404d-a3c8-e065ee94f86a", 00:15:20.566 "assigned_rate_limits": { 00:15:20.566 "rw_ios_per_sec": 0, 00:15:20.566 "rw_mbytes_per_sec": 0, 00:15:20.566 "r_mbytes_per_sec": 0, 00:15:20.566 "w_mbytes_per_sec": 0 00:15:20.566 }, 00:15:20.566 "claimed": false, 00:15:20.566 "zoned": false, 00:15:20.566 "supported_io_types": { 
00:15:20.566 "read": true, 00:15:20.566 "write": true, 00:15:20.566 "unmap": true, 00:15:20.566 "flush": false, 00:15:20.566 "reset": true, 00:15:20.566 "nvme_admin": false, 00:15:20.566 "nvme_io": false, 00:15:20.566 "nvme_io_md": false, 00:15:20.566 "write_zeroes": true, 00:15:20.566 "zcopy": false, 00:15:20.566 "get_zone_info": false, 00:15:20.566 "zone_management": false, 00:15:20.566 "zone_append": false, 00:15:20.566 "compare": false, 00:15:20.566 "compare_and_write": false, 00:15:20.566 "abort": false, 00:15:20.566 "seek_hole": true, 00:15:20.566 "seek_data": true, 00:15:20.566 "copy": false, 00:15:20.566 "nvme_iov_md": false 00:15:20.566 }, 00:15:20.566 "driver_specific": { 00:15:20.566 "lvol": { 00:15:20.566 "lvol_store_uuid": "2ee9b833-335b-419b-9667-efa7f064c4da", 00:15:20.566 "base_bdev": "aio_bdev", 00:15:20.566 "thin_provision": false, 00:15:20.566 "num_allocated_clusters": 38, 00:15:20.566 "snapshot": false, 00:15:20.566 "clone": false, 00:15:20.566 "esnap_clone": false 00:15:20.566 } 00:15:20.566 } 00:15:20.566 } 00:15:20.566 ] 00:15:20.567 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:20.567 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:20.567 20:10:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:20.827 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:20.827 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:20.827 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:20.827 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:20.827 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8bd19515-4c2d-404d-a3c8-e065ee94f86a 00:15:21.096 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ee9b833-335b-419b-9667-efa7f064c4da 00:15:21.356 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:21.356 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:21.356 00:15:21.356 real 0m17.065s 00:15:21.356 user 0m44.412s 00:15:21.356 sys 0m2.991s 00:15:21.356 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.356 20:10:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:21.356 ************************************ 00:15:21.356 END TEST lvs_grow_dirty 00:15:21.356 ************************************ 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:21.615 nvmf_trace.0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.615 rmmod nvme_tcp 00:15:21.615 rmmod nvme_fabrics 00:15:21.615 rmmod nvme_keyring 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 933498 ']' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 933498 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 933498 ']' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 933498 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933498 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933498' 00:15:21.615 killing process with pid 933498 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 933498 00:15:21.615 20:10:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 933498 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.875 20:10:19 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.875 20:10:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.782 20:10:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:23.782 00:15:23.782 real 0m43.131s 00:15:23.782 user 1m5.100s 00:15:23.782 sys 0m10.206s 00:15:23.782 20:10:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.782 20:10:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:23.782 ************************************ 00:15:23.782 END TEST nvmf_lvs_grow 00:15:23.782 ************************************ 00:15:23.782 20:10:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:23.782 20:10:21 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:23.782 20:10:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:23.782 20:10:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.782 20:10:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:24.043 ************************************ 00:15:24.043 START TEST nvmf_bdev_io_wait 00:15:24.043 ************************************ 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:24.043 * Looking for test storage... 
00:15:24.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:24.043 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:30.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:30.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:30.644 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:30.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.644 20:10:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.644 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:15:30.905 00:15:30.905 --- 10.0.0.2 ping statistics --- 00:15:30.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.905 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:15:30.905 00:15:30.905 --- 10.0.0.1 ping statistics --- 00:15:30.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.905 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.905 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.906 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.906 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.906 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.906 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=938262 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 938262 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 938262 ']' 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.166 20:10:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:31.166 [2024-07-15 20:10:28.400154] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
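[Editor's sketch] Both pings succeed, so the namespace plumbing that nvmf_tcp_init set up above is sound before the target application starts. A rough sketch of that bring-up, assuming the cvl_0_* interface names and the 10.0.0.0/24 addressing used in this run:

# Move the target-side port into its own namespace and address both ends,
# mirroring the nvmf_tcp_init steps traced above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target side lives inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                        # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> root namespace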
00:15:31.166 [2024-07-15 20:10:28.400207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.166 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.166 [2024-07-15 20:10:28.466622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.166 [2024-07-15 20:10:28.535150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.166 [2024-07-15 20:10:28.535185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.166 [2024-07-15 20:10:28.535193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.166 [2024-07-15 20:10:28.535199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.166 [2024-07-15 20:10:28.535205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.166 [2024-07-15 20:10:28.535381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.166 [2024-07-15 20:10:28.535494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.166 [2024-07-15 20:10:28.535648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.166 [2024-07-15 20:10:28.535649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.750 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.750 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:31.750 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.750 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.750 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 [2024-07-15 20:10:29.275810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
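[Editor's sketch] The target bring-up traced just above, condensed: nvmf_tgt is started inside the namespace with --wait-for-rpc, the bdev layer is tuned, framework init is resumed, and the TCP transport is created. Binary path, core mask and transport options are copied verbatim from this run; waitforlisten is omitted, and the -o/-u transport flags are passed through as-is since the log does not spell out their meaning:

# Start nvmf_tgt inside the test namespace, paused until RPCs arrive.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1                # tiny bdev_io pool/cache so the IO-wait path is exercised
$rpc framework_start_init                      # resume the init that --wait-for-rpc held back
$rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport with the options from NVMF_TRANSPORT_OPTS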
00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 Malloc0 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.012 [2024-07-15 20:10:29.343378] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=938612 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=938614 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:32.012 { 00:15:32.012 "params": { 00:15:32.012 "name": "Nvme$subsystem", 00:15:32.012 "trtype": "$TEST_TRANSPORT", 00:15:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.012 "adrfam": "ipv4", 00:15:32.012 "trsvcid": "$NVMF_PORT", 00:15:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.012 "hdgst": ${hdgst:-false}, 00:15:32.012 "ddgst": ${ddgst:-false} 00:15:32.012 }, 00:15:32.012 "method": "bdev_nvme_attach_controller" 00:15:32.012 } 00:15:32.012 EOF 00:15:32.012 )") 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=938616 00:15:32.012 20:10:29 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=938619 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:32.012 { 00:15:32.012 "params": { 00:15:32.012 "name": "Nvme$subsystem", 00:15:32.012 "trtype": "$TEST_TRANSPORT", 00:15:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.012 "adrfam": "ipv4", 00:15:32.012 "trsvcid": "$NVMF_PORT", 00:15:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.012 "hdgst": ${hdgst:-false}, 00:15:32.012 "ddgst": ${ddgst:-false} 00:15:32.012 }, 00:15:32.012 "method": "bdev_nvme_attach_controller" 00:15:32.012 } 00:15:32.012 EOF 00:15:32.012 )") 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:32.012 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:32.012 { 00:15:32.012 "params": { 00:15:32.012 "name": "Nvme$subsystem", 00:15:32.012 "trtype": "$TEST_TRANSPORT", 00:15:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.012 "adrfam": "ipv4", 00:15:32.012 "trsvcid": "$NVMF_PORT", 00:15:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.012 "hdgst": ${hdgst:-false}, 00:15:32.012 "ddgst": ${ddgst:-false} 00:15:32.012 }, 00:15:32.013 "method": "bdev_nvme_attach_controller" 00:15:32.013 } 00:15:32.013 EOF 00:15:32.013 )") 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:32.013 { 00:15:32.013 "params": { 00:15:32.013 "name": "Nvme$subsystem", 00:15:32.013 "trtype": "$TEST_TRANSPORT", 00:15:32.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.013 "adrfam": "ipv4", 00:15:32.013 "trsvcid": "$NVMF_PORT", 00:15:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.013 "hdgst": ${hdgst:-false}, 00:15:32.013 "ddgst": ${ddgst:-false} 00:15:32.013 }, 00:15:32.013 "method": "bdev_nvme_attach_controller" 00:15:32.013 } 00:15:32.013 EOF 00:15:32.013 )") 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 938612 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:32.013 "params": { 00:15:32.013 "name": "Nvme1", 00:15:32.013 "trtype": "tcp", 00:15:32.013 "traddr": "10.0.0.2", 00:15:32.013 "adrfam": "ipv4", 00:15:32.013 "trsvcid": "4420", 00:15:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.013 "hdgst": false, 00:15:32.013 "ddgst": false 00:15:32.013 }, 00:15:32.013 "method": "bdev_nvme_attach_controller" 00:15:32.013 }' 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
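[Editor's sketch] Each pretty-printed blob above is the bdev_nvme_attach_controller entry that gen_nvmf_target_json hands one bdevperf worker over /dev/fd/63; the four workers differ only in core mask, instance id and workload (-w write/read/flush/unmap). Below is a sketch of the target-side subsystem setup plus the write worker, with values copied from this run; the outer JSON wrapper follows the usual SPDK subsystems/config shape rather than being quoted from the log, and a temp file stands in for the /dev/fd/63 process substitution:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# The write worker; the read, flush and unmap workers only change -m, -i and -w.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256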
00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:32.013 "params": { 00:15:32.013 "name": "Nvme1", 00:15:32.013 "trtype": "tcp", 00:15:32.013 "traddr": "10.0.0.2", 00:15:32.013 "adrfam": "ipv4", 00:15:32.013 "trsvcid": "4420", 00:15:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.013 "hdgst": false, 00:15:32.013 "ddgst": false 00:15:32.013 }, 00:15:32.013 "method": "bdev_nvme_attach_controller" 00:15:32.013 }' 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:32.013 "params": { 00:15:32.013 "name": "Nvme1", 00:15:32.013 "trtype": "tcp", 00:15:32.013 "traddr": "10.0.0.2", 00:15:32.013 "adrfam": "ipv4", 00:15:32.013 "trsvcid": "4420", 00:15:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.013 "hdgst": false, 00:15:32.013 "ddgst": false 00:15:32.013 }, 00:15:32.013 "method": "bdev_nvme_attach_controller" 00:15:32.013 }' 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:32.013 20:10:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:32.013 "params": { 00:15:32.013 "name": "Nvme1", 00:15:32.013 "trtype": "tcp", 00:15:32.013 "traddr": "10.0.0.2", 00:15:32.013 "adrfam": "ipv4", 00:15:32.013 "trsvcid": "4420", 00:15:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.013 "hdgst": false, 00:15:32.013 "ddgst": false 00:15:32.013 }, 00:15:32.013 "method": "bdev_nvme_attach_controller" 00:15:32.013 }' 00:15:32.013 [2024-07-15 20:10:29.398334] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:32.013 [2024-07-15 20:10:29.398381] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:32.013 [2024-07-15 20:10:29.398430] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:32.013 [2024-07-15 20:10:29.398481] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:32.013 [2024-07-15 20:10:29.398487] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:32.013 [2024-07-15 20:10:29.398531] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:32.013 [2024-07-15 20:10:29.399071] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:15:32.013 [2024-07-15 20:10:29.399115] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:32.274 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.274 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.274 [2024-07-15 20:10:29.547129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.274 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.274 [2024-07-15 20:10:29.595302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.274 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.274 [2024-07-15 20:10:29.598804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:32.274 [2024-07-15 20:10:29.640711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.274 [2024-07-15 20:10:29.645924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:32.274 [2024-07-15 20:10:29.690736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:32.274 [2024-07-15 20:10:29.690852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.535 [2024-07-15 20:10:29.740303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:32.535 Running I/O for 1 seconds... 00:15:32.535 Running I/O for 1 seconds... 00:15:32.535 Running I/O for 1 seconds... 00:15:32.795 Running I/O for 1 seconds... 00:15:33.737 00:15:33.737 Latency(us) 00:15:33.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.737 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:33.737 Nvme1n1 : 1.00 14624.79 57.13 0.00 0.00 8725.46 4969.81 18896.21 00:15:33.737 =================================================================================================================== 00:15:33.737 Total : 14624.79 57.13 0.00 0.00 8725.46 4969.81 18896.21 00:15:33.737 00:15:33.737 Latency(us) 00:15:33.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.737 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:33.737 Nvme1n1 : 1.01 7320.03 28.59 0.00 0.00 17350.61 8465.07 29709.65 00:15:33.737 =================================================================================================================== 00:15:33.737 Total : 7320.03 28.59 0.00 0.00 17350.61 8465.07 29709.65 00:15:33.737 00:15:33.737 Latency(us) 00:15:33.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.737 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:33.737 Nvme1n1 : 1.00 187903.55 734.00 0.00 0.00 678.05 274.77 1140.05 00:15:33.737 =================================================================================================================== 00:15:33.737 Total : 187903.55 734.00 0.00 0.00 678.05 274.77 1140.05 00:15:33.737 20:10:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 938614 00:15:33.737 00:15:33.737 Latency(us) 00:15:33.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.737 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:33.737 Nvme1n1 : 1.01 7765.44 30.33 0.00 0.00 16422.14 6007.47 43253.76 00:15:33.737 =================================================================================================================== 00:15:33.737 Total : 7765.44 30.33 
0.00 0.00 16422.14 6007.47 43253.76 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 938616 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 938619 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.998 rmmod nvme_tcp 00:15:33.998 rmmod nvme_fabrics 00:15:33.998 rmmod nvme_keyring 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 938262 ']' 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 938262 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 938262 ']' 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 938262 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938262 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938262' 00:15:33.998 killing process with pid 938262 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 938262 00:15:33.998 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 938262 00:15:34.280 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.281 20:10:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.191 20:10:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.191 00:15:36.191 real 0m12.252s 00:15:36.191 user 0m19.010s 00:15:36.191 sys 0m6.498s 00:15:36.191 20:10:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.191 20:10:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:36.191 ************************************ 00:15:36.191 END TEST nvmf_bdev_io_wait 00:15:36.191 ************************************ 00:15:36.191 20:10:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:36.191 20:10:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:36.191 20:10:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:36.191 20:10:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.191 20:10:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.191 ************************************ 00:15:36.191 START TEST nvmf_queue_depth 00:15:36.191 ************************************ 00:15:36.191 20:10:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:36.452 * Looking for test storage... 00:15:36.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.452 20:10:33 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.452 20:10:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 
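The gather_supported_nvmf_pci_devs scan that follows classifies NICs by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox IDs) and then maps each matching PCI function to its kernel netdev through sysfs. Roughly, it amounts to the sketch below; the real helper reads a pre-built pci_bus_cache map rather than calling lspci directly, so treat this as an approximation:

# sketch of the E810 ("ice") branch taken in this run
intel=0x8086
e810=( $(lspci -Dnd "${intel#0x}:159b" | awk '{print $1}') )   # e.g. 0000:4b:00.0 and 0000:4b:00.1
net_devs=()
for pci in "${e810[@]}"; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do            # PCI function -> netdev name
        echo "Found net devices under $pci: ${dev##*/}"         # cvl_0_0 / cvl_0_1 below
        net_devs+=("${dev##*/}")
    done
done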
00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:43.039 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:43.039 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.039 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:43.040 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:43.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.040 20:10:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:15:43.040 00:15:43.040 --- 10.0.0.2 ping statistics --- 00:15:43.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.040 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:15:43.040 00:15:43.040 --- 10.0.0.1 ping statistics --- 00:15:43.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.040 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=942973 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 942973 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 942973 ']' 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.040 20:10:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.040 [2024-07-15 20:10:40.280765] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:43.040 [2024-07-15 20:10:40.280834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.040 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.040 [2024-07-15 20:10:40.370142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.040 [2024-07-15 20:10:40.462137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
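The addressing established above is what lets one physical host act as both NVMe/TCP target and initiator: the target port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, and every nvmf_tgt invocation from here on is prefixed with ip netns exec cvl_0_0_ns_spdk. Condensed from the commands in the log, nvmf_tcp_init comes down to the following (a sketch; cleanup and retries omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns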
00:15:43.040 [2024-07-15 20:10:40.462198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.041 [2024-07-15 20:10:40.462206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.041 [2024-07-15 20:10:40.462213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.041 [2024-07-15 20:10:40.462219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.041 [2024-07-15 20:10:40.462247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 [2024-07-15 20:10:41.106957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 Malloc0 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 [2024-07-15 
20:10:41.163321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=943085 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 943085 /var/tmp/bdevperf.sock 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 943085 ']' 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.985 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:43.985 [2024-07-15 20:10:41.219194] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:15:43.985 [2024-07-15 20:10:41.219254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943085 ] 00:15:43.985 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.985 [2024-07-15 20:10:41.282244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.985 [2024-07-15 20:10:41.356737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.557 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.557 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:44.557 20:10:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.557 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.557 20:10:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.818 NVMe0n1 00:15:44.818 20:10:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.818 20:10:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.818 Running I/O for 10 seconds... 
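Everything behind the 10-second run announced above was configured over SPDK's JSON-RPC sockets; pulling the rpc_cmd calls from the log together in one place gives this sketch (rpc_cmd wraps scripts/rpc.py, binary paths shortened):

# target side, default socket /var/tmp/spdk.sock
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf on its own RPC socket, 1024-deep queue, 4 KiB verify I/O, 10 s
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                 # results reported below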
00:15:54.845 00:15:54.845 Latency(us) 00:15:54.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.845 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:54.845 Verification LBA range: start 0x0 length 0x4000 00:15:54.845 NVMe0n1 : 10.05 11655.17 45.53 0.00 0.00 87515.79 14199.47 61603.84 00:15:54.845 =================================================================================================================== 00:15:54.845 Total : 11655.17 45.53 0.00 0.00 87515.79 14199.47 61603.84 00:15:54.845 0 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 943085 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 943085 ']' 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 943085 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 943085 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 943085' 00:15:55.106 killing process with pid 943085 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 943085 00:15:55.106 Received shutdown signal, test time was about 10.000000 seconds 00:15:55.106 00:15:55.106 Latency(us) 00:15:55.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.106 =================================================================================================================== 00:15:55.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:55.106 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 943085 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.107 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.107 rmmod nvme_tcp 00:15:55.107 rmmod nvme_fabrics 00:15:55.107 rmmod nvme_keyring 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 942973 ']' 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 942973 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 942973 ']' 
00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 942973 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942973 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942973' 00:15:55.368 killing process with pid 942973 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 942973 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 942973 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.368 20:10:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.918 20:10:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:57.918 00:15:57.918 real 0m21.225s 00:15:57.918 user 0m25.079s 00:15:57.918 sys 0m6.142s 00:15:57.918 20:10:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.918 20:10:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:57.918 ************************************ 00:15:57.918 END TEST nvmf_queue_depth 00:15:57.918 ************************************ 00:15:57.918 20:10:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:57.918 20:10:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:57.918 20:10:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.918 20:10:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.918 20:10:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:57.918 ************************************ 00:15:57.918 START TEST nvmf_target_multipath 00:15:57.918 ************************************ 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:57.918 * Looking for test storage... 
00:15:57.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.918 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.919 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.919 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.919 20:10:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.919 
20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.919 20:10:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:04.517 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:04.517 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.517 
20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:04.517 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:04.517 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.517 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.778 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.778 20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.778 
20:11:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:16:04.778 00:16:04.778 --- 10.0.0.2 ping statistics --- 00:16:04.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.778 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:16:04.778 00:16:04.778 --- 10.0.0.1 ping statistics --- 00:16:04.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.778 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:04.778 only one NIC for nvmf test 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.778 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.778 rmmod nvme_tcp 00:16:04.778 rmmod nvme_fabrics 00:16:05.057 rmmod nvme_keyring 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:05.057 20:11:02 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.057 20:11:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.015 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.016 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.016 20:11:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.016 20:11:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.016 20:11:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.016 00:16:07.016 real 0m9.472s 00:16:07.016 user 0m2.041s 00:16:07.016 sys 0m5.317s 00:16:07.016 20:11:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.016 20:11:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:07.016 
************************************ 00:16:07.016 END TEST nvmf_target_multipath 00:16:07.016 ************************************ 00:16:07.016 20:11:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:07.016 20:11:04 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:07.016 20:11:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.016 20:11:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.016 20:11:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.016 ************************************ 00:16:07.016 START TEST nvmf_zcopy 00:16:07.016 ************************************ 00:16:07.016 20:11:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:07.277 * Looking for test storage... 00:16:07.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
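The very long PATH assignments above are paths/export.sh being re-sourced for each test; all it does is prepend the Go, protoc and golangci toolchain directories and export the result (the accumulated duplicate entries in PATH come from the repeated prepends). A minimal sketch, with the directories copied from the trace:

# paths/export.sh, reduced to its traced effect.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
echo "$PATH"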
00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.277 20:11:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
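The discovery performed by the next traced lines first builds per-vendor lists of supported device IDs (e810, x722, mlx) and then resolves each matching PCI function to the kernel net interface that sits under it in sysfs. A standalone sketch of that PCI-to-netdev mapping, using an address taken from the trace below:

#!/usr/bin/env bash
# Map a PCI function to the net device(s) beneath it, as nvmf/common.sh does
# at @383/@399/@400 in the trace (the address is copied from the trace).
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"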
00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:13.867 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:13.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:13.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:13.867 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.867 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.868 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.127 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.127 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.127 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.127 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.127 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.127 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:14.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:16:14.388 00:16:14.388 --- 10.0.0.2 ping statistics --- 00:16:14.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.388 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.482 ms 00:16:14.388 00:16:14.388 --- 10.0.0.1 ping statistics --- 00:16:14.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.388 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=953650 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 953650 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 953650 ']' 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.388 20:11:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:14.388 [2024-07-15 20:11:11.711066] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:16:14.388 [2024-07-15 20:11:11.711163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.388 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.388 [2024-07-15 20:11:11.800776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.649 [2024-07-15 20:11:11.891734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.649 [2024-07-15 20:11:11.891791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.649 [2024-07-15 20:11:11.891799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.649 [2024-07-15 20:11:11.891806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.649 [2024-07-15 20:11:11.891812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.649 [2024-07-15 20:11:11.891848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 [2024-07-15 20:11:12.547045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 [2024-07-15 20:11:12.563290] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:15.221 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.222 malloc0 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.222 
20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:15.222 { 00:16:15.222 "params": { 00:16:15.222 "name": "Nvme$subsystem", 00:16:15.222 "trtype": "$TEST_TRANSPORT", 00:16:15.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.222 "adrfam": "ipv4", 00:16:15.222 "trsvcid": "$NVMF_PORT", 00:16:15.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.222 "hdgst": ${hdgst:-false}, 00:16:15.222 "ddgst": ${ddgst:-false} 00:16:15.222 }, 00:16:15.222 "method": "bdev_nvme_attach_controller" 00:16:15.222 } 00:16:15.222 EOF 00:16:15.222 )") 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:15.222 20:11:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:15.222 "params": { 00:16:15.222 "name": "Nvme1", 00:16:15.222 "trtype": "tcp", 00:16:15.222 "traddr": "10.0.0.2", 00:16:15.222 "adrfam": "ipv4", 00:16:15.222 "trsvcid": "4420", 00:16:15.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.222 "hdgst": false, 00:16:15.222 "ddgst": false 00:16:15.222 }, 00:16:15.222 "method": "bdev_nvme_attach_controller" 00:16:15.222 }' 00:16:15.222 [2024-07-15 20:11:12.651241] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:16:15.222 [2024-07-15 20:11:12.651305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953732 ] 00:16:15.483 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.483 [2024-07-15 20:11:12.714653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.483 [2024-07-15 20:11:12.789441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.743 Running I/O for 10 seconds... 
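Stripped of the xtrace noise, the target-side bring-up performed by zcopy.sh above reduces to the RPC sequence below. Arguments are copied from the trace; rpc_cmd is the autotest helper, assumed here to forward to scripts/rpc.py against the nvmf_tgt that was started earlier inside the cvl_0_0_ns_spdk namespace.

# Target launched earlier (see the trace):
#   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The first bdevperf pass then connects over NVMe/TCP using the JSON produced by gen_nvmf_target_json and runs the 10-second verify workload whose results follow.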
00:16:25.743 00:16:25.743 Latency(us) 00:16:25.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.743 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:25.743 Verification LBA range: start 0x0 length 0x1000 00:16:25.743 Nvme1n1 : 10.01 9361.24 73.13 0.00 0.00 13622.49 2321.07 34078.72 00:16:25.743 =================================================================================================================== 00:16:25.743 Total : 9361.24 73.13 0.00 0.00 13622.49 2321.07 34078.72 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=955834 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:25.743 { 00:16:25.743 "params": { 00:16:25.743 "name": "Nvme$subsystem", 00:16:25.743 "trtype": "$TEST_TRANSPORT", 00:16:25.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.743 "adrfam": "ipv4", 00:16:25.743 "trsvcid": "$NVMF_PORT", 00:16:25.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.743 "hdgst": ${hdgst:-false}, 00:16:25.743 "ddgst": ${ddgst:-false} 00:16:25.743 }, 00:16:25.743 "method": "bdev_nvme_attach_controller" 00:16:25.743 } 00:16:25.743 EOF 00:16:25.743 )") 00:16:25.743 [2024-07-15 20:11:23.149297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.743 [2024-07-15 20:11:23.149327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:25.743 [2024-07-15 20:11:23.157279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.743 [2024-07-15 20:11:23.157288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:25.743 20:11:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:25.743 "params": { 00:16:25.743 "name": "Nvme1", 00:16:25.743 "trtype": "tcp", 00:16:25.743 "traddr": "10.0.0.2", 00:16:25.743 "adrfam": "ipv4", 00:16:25.743 "trsvcid": "4420", 00:16:25.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.743 "hdgst": false, 00:16:25.743 "ddgst": false 00:16:25.743 }, 00:16:25.743 "method": "bdev_nvme_attach_controller" 00:16:25.743 }' 00:16:25.743 [2024-07-15 20:11:23.165297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.743 [2024-07-15 20:11:23.165305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.743 [2024-07-15 20:11:23.173318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.743 [2024-07-15 20:11:23.173325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.181339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.181346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.189047] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:16:26.005 [2024-07-15 20:11:23.189094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955834 ] 00:16:26.005 [2024-07-15 20:11:23.189360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.189367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.197380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.197387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.205402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.205409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.213422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.213429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.005 [2024-07-15 20:11:23.221444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.221452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.229464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.229471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [2024-07-15 20:11:23.237485] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.005 [2024-07-15 20:11:23.237492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.005 [this error pair repeats with advancing timestamps from 20:11:23.245 through 20:11:23.610 while the second bdevperf instance starts up; the repetitions are elided and only the interleaved one-off messages are kept] 00:16:26.005 [2024-07-15 20:11:23.247126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.006 [2024-07-15 20:11:23.311632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.267 Running I/O for 5 seconds...
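The second bdevperf pass is launched the same way but with a 5-second randrw workload; the /dev/fd/63 path in its --json argument is consistent with bash process substitution feeding it the generated config. A hedged sketch (flags copied from the trace; the <(...) form is an inference):

# Second traced bdevperf invocation, reconstructed.
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json) \
    -t 5 -q 128 -w randrw -M 50 -o 8192

The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs around it are the target rejecting nvmf_subsystem_add_ns calls for an NSID that is already attached, which is exactly what the message text states.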
00:16:26.267 [the same subsystem.c:2058 "Requested NSID 1 already in use" / nvmf_rpc.c:1553 "Unable to add namespace" pair keeps repeating with advancing timestamps, 20:11:23.618 through 20:11:24.511, while the 5-second randrw run is in flight; this excerpt ends mid-repetition at that point]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.520394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.520409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.529219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.529234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.538109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.538128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.546768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.546783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.555723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.555737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.564533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.564548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.572738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.572753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.581316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.581331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.589662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.589677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.598721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.598736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.607241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.607255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.616296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.616310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.625247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.625262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.633951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.633965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.642652] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.642667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.651382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.651396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.659808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.659822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.668315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.668329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.676892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.676907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.685436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-07-15 20:11:24.685450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-07-15 20:11:24.694254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.694268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.315 [2024-07-15 20:11:24.703002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.703016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.315 [2024-07-15 20:11:24.711970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.711985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.315 [2024-07-15 20:11:24.720762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.720776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.315 [2024-07-15 20:11:24.729705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.729719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.315 [2024-07-15 20:11:24.738594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.738609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.315 [2024-07-15 20:11:24.746935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.315 [2024-07-15 20:11:24.746950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.755469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.755484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.764140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.764154] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.772936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.772950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.781339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.781354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.790226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.790241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.798860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.798874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.807577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.807591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.816091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.816106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.824723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.824737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.833692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.833707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.842486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.842500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.850944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.850958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.859723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.859737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.868255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.868270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.876779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.876793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.885812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.885826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.894731] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.894745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.903545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.903560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.912331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.912346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.920970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.920984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.929364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.929378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.938117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.938137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.946929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.946944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.955841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.955856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.964374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.964388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.973406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.973421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.981862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.981876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.990852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.990866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:24.999749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:24.999763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.576 [2024-07-15 20:11:25.008095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.576 [2024-07-15 20:11:25.008109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.017164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.017178] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.025461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.025476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.033982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.033996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.042381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.042395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.051413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.051427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.060284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.060299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.069104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.069119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.077908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.077922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.086574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.086589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.095100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.095114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.103230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.103245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.111747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.111761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.120751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.120765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.129724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.129739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.138626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.138641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.146680] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.146694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.155726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.155741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.164602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.164617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.173028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.173042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.181648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.181662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.190688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.190703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.198686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.198700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.207493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.207507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.216169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.216184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.224993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.225008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.233942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.233956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.242567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.242585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.838 [2024-07-15 20:11:25.251523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.838 [2024-07-15 20:11:25.251537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.839 [2024-07-15 20:11:25.260524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.839 [2024-07-15 20:11:25.260538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.839 [2024-07-15 20:11:25.269050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.839 [2024-07-15 20:11:25.269064] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.277311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.277325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.285765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.285779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.294095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.294109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.302541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.302555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.311443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.311457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.319896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.319910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.328418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.328432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.337470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.337485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.346066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.346080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.354664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.354678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.363305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.363319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.371967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.371981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.380709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.380723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.389603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.389617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.397987] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.398001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.406172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.406189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.415035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.415049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.423907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.423921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.432413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.432427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.441284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.441298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.449412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.449426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.458371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.458385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.466647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.466661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.475588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.475602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.484458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.484472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.493244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.493258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.501855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.501869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.510534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.510548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.519424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.519437] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.101 [2024-07-15 20:11:25.528587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.101 [2024-07-15 20:11:25.528601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.537437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.537451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.546557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.546570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.555417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.555431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.564187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.564201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.572801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.572818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.581580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.581595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.590463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.590477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.599474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.599488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.608307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.608321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.617148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.617162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.626129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.626144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.634486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.634500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.643549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.643563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.651869] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.651883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.660409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.660423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.668962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.668976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.677533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.677547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.685955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.685969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.694872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.694886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.704194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.704208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.712294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.712308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.720885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.720899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.729572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.729586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.738069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.738086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.746789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.746803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.755503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.755517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.764220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.764234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.773248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.773262] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.782024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.782038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.363 [2024-07-15 20:11:25.790929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.363 [2024-07-15 20:11:25.790943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.799928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.799942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.808645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.808658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.817583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.817597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.826463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.826478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.834658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.834672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.843186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.843199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.852285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.852299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.860511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.860524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.869230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.869244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.877937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.877951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.885912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.885926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.894226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.894240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.903185] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.903199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.912034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.912048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.920603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.920617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.928861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.928875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.937787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.937801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.946653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.946667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.955494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.955508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.964396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.964409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.972965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.972979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.981924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.981938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.990768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.990782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:25.999444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:25.999458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:26.008294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:26.008308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:26.017288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:26.017302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:26.026056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:26.026069] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:26.035026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:26.035041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:26.043560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:26.043574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.625 [2024-07-15 20:11:26.052262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.625 [2024-07-15 20:11:26.052276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.060914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.060929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.069611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.069625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.078406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.078420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.087299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.087313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.096082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.096096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.104717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.104731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.113418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.886 [2024-07-15 20:11:26.113431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.886 [2024-07-15 20:11:26.122222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.122237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.131082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.131096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.139811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.139825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.148242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.148256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.156772] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.156786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.165307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.165320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.174036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.174050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.182721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.182735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.191147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.191161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.199710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.199724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.208915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.208929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.217929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.217944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.226942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.226955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.235821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.235836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.244614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.244628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.253502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.253516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.262236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.262250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.270555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.270569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.279148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.279163] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.288029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.288044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.296922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.296936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.305646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.305661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.887 [2024-07-15 20:11:26.314306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.887 [2024-07-15 20:11:26.314322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.148 [2024-07-15 20:11:26.322911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.148 [2024-07-15 20:11:26.322926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.331637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.331651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.340935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.340949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.349742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.349756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.358731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.358745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.367524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.367538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.376672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.376687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.385416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.385430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.394178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.394194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.403109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.403127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.411950] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.411965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.420438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.420452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.429340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.429354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.438158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.438173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.446392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.446407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.455051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.455066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.463767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.463781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.472703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.472718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.481583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.481598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.490274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.490289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.499228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.499243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.507513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.507527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.516060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.516075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.524419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.524433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.532927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.532942] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.541462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.541476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.550514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.550528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.559414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.559433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.567813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.567828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.149 [2024-07-15 20:11:26.576344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.149 [2024-07-15 20:11:26.576358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.410 [2024-07-15 20:11:26.585134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.410 [2024-07-15 20:11:26.585149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.410 [2024-07-15 20:11:26.594069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.410 [2024-07-15 20:11:26.594084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.602761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.602776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.611461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.611475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.620567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.620582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.628775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.628789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.637437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.637452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.646152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.646166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.654813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.654827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.663230] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.663245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.672072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.672086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.680735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.680749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.689400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.689414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.698207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.698221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.706996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.707011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.715829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.715843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.724588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.724606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.733359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.733374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.742362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.742377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.750528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.750542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.759254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.759268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.768393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.768407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.777519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.777533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.786062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.786077] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.795006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.795020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.803941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.803955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.812669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.812683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.821377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.821392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.829907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.829921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.411 [2024-07-15 20:11:26.838530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.411 [2024-07-15 20:11:26.838544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.847502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.847517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.855854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.855869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.864505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.864519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.873069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.873083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.881397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.881411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.889994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.890014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.898968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.898983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.907584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.907598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.916445] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.916460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.925157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.925171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.934003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.934017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.943078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.943093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.952056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.952070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.960905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.960919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.969947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.969961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.978678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.978692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.987207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.987220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:26.995977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:26.995991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.004171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.004185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.013143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.013157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.021778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.021792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.030545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.030559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.039419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.039434] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.048185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.048200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.056932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.056950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.065712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.065726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.074615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.074629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.083425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.083439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.096545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.096560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.676 [2024-07-15 20:11:27.104277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.676 [2024-07-15 20:11:27.104292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.112863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.112877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.121463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.121477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.130171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.130185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.138960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.138974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.147687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.147701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.156511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.156525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.165223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.165237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.174031] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.174046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.182384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.182399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.190855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.190869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.199459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.199473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.208543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.208557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.217374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.217389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.226115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.226134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.234769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.234783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.243495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.243509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.252230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.252244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.260880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.260894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.269465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.269479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.278163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.278177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.286689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.286703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.295403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.295417] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.304260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.304274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.313185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.313199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.322296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.322311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.331022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.331037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.339663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.339677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.348864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.348879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.357728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.357742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.366731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.366746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.375064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.375079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.383478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.383492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.392557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.392571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.400323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.400337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.409330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.409344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.417295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.417309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.425998] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.426012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.008 [2024-07-15 20:11:27.434621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.008 [2024-07-15 20:11:27.434635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.443484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.443498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.452356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.452370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.460918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.460932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.469813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.469826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.479039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.479054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.486686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.486699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.495516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.495530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.504302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.504316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.512783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.512797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.521440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.521454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.269 [2024-07-15 20:11:27.530321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.269 [2024-07-15 20:11:27.530336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.539176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.539191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.548092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.548106] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.556885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.556899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.565264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.565278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.573814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.573829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.582447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.582462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.590940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.590953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.599818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.599832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.608288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.608302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.616840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.616854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.625297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.625311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.633739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.633753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.642383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.642397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.651027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.651042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.659959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.659973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.668498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.668511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.677435] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.677450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.686433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.686447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.270 [2024-07-15 20:11:27.694505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.270 [2024-07-15 20:11:27.694519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.703089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.703103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.711921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.711935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.720411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.720425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.729397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.729410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.738283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.738297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.747235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.747249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.756234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.756248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.764345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.764359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.772915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.772929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.781430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.781444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.790362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.790376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.799269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.799282] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.808237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.808251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.817063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.817077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.825564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.825578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.834196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.834210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.842739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.842754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.851413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.851426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.860179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.860193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.868638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.868652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.877554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.877571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.885606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.885619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.894644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.894659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.552 [2024-07-15 20:11:27.903401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.552 [2024-07-15 20:11:27.903415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.911934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.911949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.920898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.920912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.929307] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.929322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.938047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.938061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.947072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.947086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.955832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.955847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.964077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.964091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.972797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.972812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.553 [2024-07-15 20:11:27.981492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.553 [2024-07-15 20:11:27.981507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:27.990008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:27.990023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:27.998876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:27.998890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.007632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:28.007647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.016468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:28.016482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.025151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:28.025165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.033501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:28.033515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.042103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:28.042126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.050644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.813 [2024-07-15 20:11:28.050659] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.813 [2024-07-15 20:11:28.059307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.059321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.068277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.068291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.076486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.076501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.085083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.085098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.093991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.094006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.102917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.102932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.111800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.111814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.120344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.120359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.129060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.129074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.137307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.137321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.145892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.145906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.154545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.154559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.162989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.163003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.171511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.171525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.180435] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.180450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.188948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.188961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.197746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.197760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.206098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.206115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.214683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.214697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.223572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.223586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.232321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.232335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.814 [2024-07-15 20:11:28.241118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.814 [2024-07-15 20:11:28.241137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.074 [2024-07-15 20:11:28.249383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.074 [2024-07-15 20:11:28.249397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.074 [2024-07-15 20:11:28.257648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.074 [2024-07-15 20:11:28.257663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.074 [2024-07-15 20:11:28.266506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.074 [2024-07-15 20:11:28.266520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.074 [2024-07-15 20:11:28.275323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.074 [2024-07-15 20:11:28.275337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.074 [2024-07-15 20:11:28.283856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.283871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.292566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.292580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.301671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.301685] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.310407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.310421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.319374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.319388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.328007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.328022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.337009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.337023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.345083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.345098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.353693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.353708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.362874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.362888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.371751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.371770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.380582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.380596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.388965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.388980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.397972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.397987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.406579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.406593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.415261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.415275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.423571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.423586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.432378] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.432392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.441283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.441297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.450070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.450085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.458932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.458947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.467792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.467807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.476787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.476802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.485643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.485658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.494505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.494520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.075 [2024-07-15 20:11:28.503101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.075 [2024-07-15 20:11:28.503116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.511972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.511987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.519613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.519627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.528448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.528462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.537367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.537381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.546045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.546058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.554915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.554929] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.563830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.563845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.572629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.572643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.581641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.581656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.590227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.590242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.598671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.598685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.607666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.607681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.616634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.616648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.625136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.625151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.633024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.633038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 00:16:31.335 Latency(us) 00:16:31.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.335 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:31.335 Nvme1n1 : 5.01 19821.08 154.85 0.00 0.00 6450.37 2402.99 21736.11 00:16:31.335 =================================================================================================================== 00:16:31.335 Total : 19821.08 154.85 0.00 0.00 6450.37 2402.99 21736.11 00:16:31.335 [2024-07-15 20:11:28.639394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.639406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.647414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.647425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.655436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.655446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.663459] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.663468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.671490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.671500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.679498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.679507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.687517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.687524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.695535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.695543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.703556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.703563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.711575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.711582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.719596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.335 [2024-07-15 20:11:28.719603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.335 [2024-07-15 20:11:28.727621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.336 [2024-07-15 20:11:28.727630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.336 [2024-07-15 20:11:28.735639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.336 [2024-07-15 20:11:28.735647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.336 [2024-07-15 20:11:28.743659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.336 [2024-07-15 20:11:28.743666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.336 [2024-07-15 20:11:28.751681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.336 [2024-07-15 20:11:28.751689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.336 [2024-07-15 20:11:28.759700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.336 [2024-07-15 20:11:28.759708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.336 [2024-07-15 20:11:28.767720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.336 [2024-07-15 20:11:28.767727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (955834) - No such process 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 955834 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.596 delay0 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.596 20:11:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:31.596 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.596 [2024-07-15 20:11:28.945344] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:38.187 Initializing NVMe Controllers 00:16:38.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:38.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:38.187 Initialization complete. Launching workers. 
00:16:38.187 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 245 00:16:38.187 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 532, failed to submit 33 00:16:38.187 success 346, unsuccess 186, failed 0 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.187 rmmod nvme_tcp 00:16:38.187 rmmod nvme_fabrics 00:16:38.187 rmmod nvme_keyring 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 953650 ']' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 953650 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 953650 ']' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 953650 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953650 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953650' 00:16:38.187 killing process with pid 953650 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 953650 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 953650 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.187 20:11:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.102 20:11:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:40.102 00:16:40.102 real 0m32.945s 00:16:40.102 user 0m45.126s 00:16:40.102 sys 0m10.087s 00:16:40.102 20:11:37 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.102 20:11:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.102 ************************************ 00:16:40.102 END TEST nvmf_zcopy 00:16:40.102 ************************************ 00:16:40.102 20:11:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:40.102 20:11:37 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:40.102 20:11:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.102 20:11:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.102 20:11:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:40.102 ************************************ 00:16:40.102 START TEST nvmf_nmic 00:16:40.102 ************************************ 00:16:40.102 20:11:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:40.102 * Looking for test storage... 00:16:40.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.363 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:40.364 20:11:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:48.506 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:48.506 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:48.506 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:48.506 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:16:48.506 00:16:48.506 --- 10.0.0.2 ping statistics --- 00:16:48.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.506 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:16:48.506 00:16:48.506 --- 10.0.0.1 ping statistics --- 00:16:48.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.506 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.506 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=962347 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 962347 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 962347 ']' 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.507 20:11:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 [2024-07-15 20:11:44.872698] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:16:48.507 [2024-07-15 20:11:44.872765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.507 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.507 [2024-07-15 20:11:44.946398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.507 [2024-07-15 20:11:45.021990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.507 [2024-07-15 20:11:45.022028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:48.507 [2024-07-15 20:11:45.022036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.507 [2024-07-15 20:11:45.022043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.507 [2024-07-15 20:11:45.022048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.507 [2024-07-15 20:11:45.022159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.507 [2024-07-15 20:11:45.022376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.507 [2024-07-15 20:11:45.022378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.507 [2024-07-15 20:11:45.022228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 [2024-07-15 20:11:45.703808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 Malloc0 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 [2024-07-15 20:11:45.763256] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:48.507 test case1: single bdev can't be used in multiple subsystems 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 [2024-07-15 20:11:45.799192] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:48.507 [2024-07-15 20:11:45.799213] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:48.507 [2024-07-15 20:11:45.799220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.507 request: 00:16:48.507 { 00:16:48.507 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:48.507 "namespace": { 00:16:48.507 "bdev_name": "Malloc0", 00:16:48.507 "no_auto_visible": false 00:16:48.507 }, 00:16:48.507 "method": "nvmf_subsystem_add_ns", 00:16:48.507 "req_id": 1 00:16:48.507 } 00:16:48.507 Got JSON-RPC error response 00:16:48.507 response: 00:16:48.507 { 00:16:48.507 "code": -32602, 00:16:48.507 "message": "Invalid parameters" 00:16:48.507 } 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:48.507 Adding namespace failed - expected result. 
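The rejection above is the expected outcome of test case1: Malloc0 is already claimed (exclusive_write) by cnode1, so adding it to cnode2 is refused with the -32602 "Invalid parameters" response. The same check can be reproduced by hand against a running nvmf_tgt using the stock scripts/rpc.py; this is a minimal sketch built only from the RPC calls visible in the trace above, not the rpc_cmd wrapper the test itself uses.
  # assumes build/bin/nvmf_tgt is already running and rpc.py talks to its default /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # expected to fail: Malloc0 is already claimed by cnode1, mirroring the
  # "bdev Malloc0 already claimed" / "Unable to add namespace" errors logged above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0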
00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:48.507 test case2: host connect to nvmf target in multiple paths 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.507 [2024-07-15 20:11:45.811312] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.507 20:11:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.891 20:11:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:51.800 20:11:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.800 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:51.800 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.800 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:51.800 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:53.745 20:11:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:53.745 [global] 00:16:53.745 thread=1 00:16:53.745 invalidate=1 00:16:53.745 rw=write 00:16:53.745 time_based=1 00:16:53.745 runtime=1 00:16:53.746 ioengine=libaio 00:16:53.746 direct=1 00:16:53.746 bs=4096 00:16:53.746 iodepth=1 00:16:53.746 norandommap=0 00:16:53.746 numjobs=1 00:16:53.746 00:16:53.746 verify_dump=1 00:16:53.746 verify_backlog=512 00:16:53.746 verify_state_save=0 00:16:53.746 do_verify=1 00:16:53.746 verify=crc32c-intel 00:16:53.746 [job0] 00:16:53.746 filename=/dev/nvme0n1 00:16:53.746 Could not set queue depth (nvme0n1) 00:16:54.013 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.013 fio-3.35 00:16:54.013 Starting 1 thread 00:16:55.395 00:16:55.395 job0: (groupid=0, jobs=1): err= 0: pid=963863: Mon Jul 15 20:11:52 2024 00:16:55.395 read: IOPS=12, BW=51.6KiB/s (52.8kB/s)(52.0KiB/1008msec) 00:16:55.395 slat (nsec): min=26251, max=28611, avg=26679.46, stdev=633.78 00:16:55.395 
clat (usec): min=41340, max=42007, avg=41916.38, stdev=176.42 00:16:55.395 lat (usec): min=41366, max=42033, avg=41943.06, stdev=176.52 00:16:55.395 clat percentiles (usec): 00:16:55.396 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:55.396 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:55.396 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:55.396 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:55.396 | 99.99th=[42206] 00:16:55.396 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:55.396 slat (usec): min=9, max=26723, avg=85.45, stdev=1179.57 00:16:55.396 clat (usec): min=529, max=991, avg=811.12, stdev=78.13 00:16:55.396 lat (usec): min=558, max=27472, avg=896.57, stdev=1179.55 00:16:55.396 clat percentiles (usec): 00:16:55.396 | 1.00th=[ 619], 5.00th=[ 668], 10.00th=[ 717], 20.00th=[ 750], 00:16:55.396 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 848], 00:16:55.396 | 70.00th=[ 865], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 930], 00:16:55.396 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 996], 99.95th=[ 996], 00:16:55.396 | 99.99th=[ 996] 00:16:55.396 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:55.396 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:55.396 lat (usec) : 750=19.62%, 1000=77.90% 00:16:55.396 lat (msec) : 50=2.48% 00:16:55.396 cpu : usr=1.29%, sys=1.89%, ctx=528, majf=0, minf=1 00:16:55.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.396 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.396 00:16:55.396 Run status group 0 (all jobs): 00:16:55.396 READ: bw=51.6KiB/s (52.8kB/s), 51.6KiB/s-51.6KiB/s (52.8kB/s-52.8kB/s), io=52.0KiB (53.2kB), run=1008-1008msec 00:16:55.396 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:16:55.396 00:16:55.396 Disk stats (read/write): 00:16:55.396 nvme0n1: ios=35/512, merge=0/0, ticks=1386/324, in_queue=1710, util=98.90% 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
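The write job above was generated by scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v), which expands to the one-second libaio write-and-verify job file shown before the results. Outside the harness, roughly the same job can be launched with fio directly; this is a sketch under the assumption that the connected namespace still enumerates as /dev/nvme0n1, as it did in this run.
  # 4 KiB writes, libaio, queue depth 1, 1 s runtime, crc32c-intel data verification
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --bs=4096 --iodepth=1 --rw=write --time_based --runtime=1 \
      --numjobs=1 --do_verify=1 --verify=crc32c-intel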
00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.396 rmmod nvme_tcp 00:16:55.396 rmmod nvme_fabrics 00:16:55.396 rmmod nvme_keyring 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 962347 ']' 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 962347 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 962347 ']' 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 962347 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962347 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 962347' 00:16:55.396 killing process with pid 962347 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 962347 00:16:55.396 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 962347 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.657 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.616 20:11:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.616 00:16:57.616 real 0m17.496s 00:16:57.616 user 0m47.634s 00:16:57.616 sys 0m6.172s 00:16:57.616 20:11:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.616 20:11:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:57.616 ************************************ 00:16:57.616 END TEST nvmf_nmic 00:16:57.616 ************************************ 00:16:57.616 20:11:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:57.616 20:11:54 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:57.616 20:11:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:16:57.616 20:11:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.616 20:11:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.616 ************************************ 00:16:57.616 START TEST nvmf_fio_target 00:16:57.616 ************************************ 00:16:57.616 20:11:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:57.877 * Looking for test storage... 00:16:57.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.877 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.878 20:11:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.467 20:12:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:04.467 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:04.467 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.467 20:12:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:04.467 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:04.467 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.467 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.468 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.728 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.728 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.728 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.728 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:04.728 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.728 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.728 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:17:04.728 00:17:04.728 --- 10.0.0.2 ping statistics --- 00:17:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.728 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:17:04.728 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:17:04.989 00:17:04.989 --- 10.0.0.1 ping statistics --- 00:17:04.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.989 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=968330 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 968330 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 968330 ']' 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
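The network bring-up that nvmftestinit traced above amounts to moving one port of the ice-driven NIC into a private namespace for the target while the sibling port stays in the root namespace as the initiator. A condensed sketch of that sequence, using only commands visible in this run (the cvl_0_0/cvl_0_1 names, the 10.0.0.0/24 addresses and the workspace path are specific to this host; the sudo prefixes, the shortened nvmf_tgt path and the trailing & are illustrative — the harness itself waits on /var/tmp/spdk.sock before issuing RPCs):

  # target-side port goes into its own namespace
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps cvl_0_1 in the root namespace
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in and sanity-check reachability both ways
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # initiator side needs the nvme-tcp module; the target app runs inside the namespace
  sudo modprobe nvme-tcp
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &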
00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.989 20:12:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.989 [2024-07-15 20:12:02.271866] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:17:04.989 [2024-07-15 20:12:02.271929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.989 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.989 [2024-07-15 20:12:02.344858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.989 [2024-07-15 20:12:02.419880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.989 [2024-07-15 20:12:02.419921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.989 [2024-07-15 20:12:02.419929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.989 [2024-07-15 20:12:02.419935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.989 [2024-07-15 20:12:02.419941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.989 [2024-07-15 20:12:02.420082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.989 [2024-07-15 20:12:02.420200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.989 [2024-07-15 20:12:02.420517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.989 [2024-07-15 20:12:02.420518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:05.930 [2024-07-15 20:12:03.239203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.930 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.190 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:06.190 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.455 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:06.455 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.455 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
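From this point fio.sh provisions the target entirely over rpc.py: two standalone malloc bdevs, two more assembled into a raid0, three more into a concat, and all four block devices exported through a single subsystem before the initiator connects — the individual calls appear in the trace that follows. A condensed sketch of that sequence (rpc.py path shortened; the Malloc* names are whatever bdev_malloc_create prints on a given run, and the hostnqn/hostid variables come from common.sh via nvme gen-hostnqn):

  rpc=./scripts/rpc.py                                     # trace uses the full workspace path
  $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, options as passed above
  m0=$($rpc bdev_malloc_create 64 512)                     # 64 MB bdevs with 512-byte blocks
  m1=$($rpc bdev_malloc_create 64 512)                     #   (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
  r0=$($rpc bdev_malloc_create 64 512)
  r1=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$r0 $r1"   # striped raid0 over two malloc bdevs
  c0=$($rpc bdev_malloc_create 64 512)
  c1=$($rpc bdev_malloc_create 64 512)
  c2=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b "$c0 $c1 $c2"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m0"
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m1"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # initiator side: connect, then wait for the four namespaces to appear as nvme0n1..nvme0n4
  sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"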
00:17:06.455 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.715 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:06.715 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:06.976 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.976 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:06.976 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.237 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:07.237 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.497 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:07.497 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:07.497 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:07.758 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:07.759 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.759 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:07.759 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:08.019 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.280 [2024-07-15 20:12:05.488895] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.280 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:08.280 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:08.541 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.455 20:12:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:10.455 20:12:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:10.455 20:12:07 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.455 20:12:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:10.455 20:12:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:10.455 20:12:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:12.373 20:12:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:12.373 [global] 00:17:12.373 thread=1 00:17:12.373 invalidate=1 00:17:12.373 rw=write 00:17:12.373 time_based=1 00:17:12.373 runtime=1 00:17:12.373 ioengine=libaio 00:17:12.373 direct=1 00:17:12.373 bs=4096 00:17:12.373 iodepth=1 00:17:12.373 norandommap=0 00:17:12.373 numjobs=1 00:17:12.373 00:17:12.373 verify_dump=1 00:17:12.373 verify_backlog=512 00:17:12.373 verify_state_save=0 00:17:12.373 do_verify=1 00:17:12.373 verify=crc32c-intel 00:17:12.373 [job0] 00:17:12.373 filename=/dev/nvme0n1 00:17:12.373 [job1] 00:17:12.373 filename=/dev/nvme0n2 00:17:12.373 [job2] 00:17:12.373 filename=/dev/nvme0n3 00:17:12.373 [job3] 00:17:12.373 filename=/dev/nvme0n4 00:17:12.373 Could not set queue depth (nvme0n1) 00:17:12.373 Could not set queue depth (nvme0n2) 00:17:12.373 Could not set queue depth (nvme0n3) 00:17:12.373 Could not set queue depth (nvme0n4) 00:17:12.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:12.633 fio-3.35 00:17:12.633 Starting 4 threads 00:17:14.055 00:17:14.055 job0: (groupid=0, jobs=1): err= 0: pid=970160: Mon Jul 15 20:12:11 2024 00:17:14.055 read: IOPS=19, BW=79.1KiB/s (81.0kB/s)(80.0KiB/1011msec) 00:17:14.055 slat (nsec): min=24723, max=27514, avg=25575.80, stdev=781.04 00:17:14.055 clat (usec): min=1224, max=42024, avg=39547.48, stdev=9033.22 00:17:14.055 lat (usec): min=1251, max=42050, avg=39573.05, stdev=9032.80 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[ 1221], 5.00th=[ 1221], 10.00th=[40633], 20.00th=[41157], 00:17:14.055 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:17:14.055 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.055 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.055 | 99.99th=[42206] 00:17:14.055 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:17:14.055 slat (nsec): min=9322, max=52884, avg=26753.79, stdev=10177.99 
00:17:14.055 clat (usec): min=150, max=3896, avg=394.88, stdev=181.72 00:17:14.055 lat (usec): min=160, max=3928, avg=421.63, stdev=183.81 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[ 167], 5.00th=[ 219], 10.00th=[ 273], 20.00th=[ 302], 00:17:14.055 | 30.00th=[ 330], 40.00th=[ 367], 50.00th=[ 404], 60.00th=[ 424], 00:17:14.055 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 498], 95.00th=[ 529], 00:17:14.055 | 99.00th=[ 627], 99.50th=[ 725], 99.90th=[ 3884], 99.95th=[ 3884], 00:17:14.055 | 99.99th=[ 3884] 00:17:14.055 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.055 lat (usec) : 250=6.02%, 500=81.95%, 750=7.89%, 1000=0.19% 00:17:14.055 lat (msec) : 2=0.19%, 4=0.19%, 50=3.57% 00:17:14.055 cpu : usr=0.79%, sys=1.29%, ctx=532, majf=0, minf=1 00:17:14.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.055 job1: (groupid=0, jobs=1): err= 0: pid=970161: Mon Jul 15 20:12:11 2024 00:17:14.055 read: IOPS=13, BW=55.0KiB/s (56.3kB/s)(56.0KiB/1018msec) 00:17:14.055 slat (nsec): min=24968, max=25514, avg=25166.86, stdev=169.23 00:17:14.055 clat (usec): min=41156, max=42054, avg=41916.56, stdev=222.71 00:17:14.055 lat (usec): min=41181, max=42079, avg=41941.73, stdev=222.77 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:14.055 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:14.055 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.055 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.055 | 99.99th=[42206] 00:17:14.055 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:17:14.055 slat (nsec): min=9622, max=70597, avg=30900.60, stdev=8184.67 00:17:14.055 clat (usec): min=424, max=1179, avg=800.84, stdev=123.49 00:17:14.055 lat (usec): min=444, max=1196, avg=831.74, stdev=125.89 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[ 474], 5.00th=[ 570], 10.00th=[ 635], 20.00th=[ 693], 00:17:14.055 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 848], 00:17:14.055 | 70.00th=[ 881], 80.00th=[ 906], 90.00th=[ 955], 95.00th=[ 979], 00:17:14.055 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1172], 99.95th=[ 1172], 00:17:14.055 | 99.99th=[ 1172] 00:17:14.055 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.055 lat (usec) : 500=1.52%, 750=28.71%, 1000=63.88% 00:17:14.055 lat (msec) : 2=3.23%, 50=2.66% 00:17:14.055 cpu : usr=0.98%, sys=1.38%, ctx=528, majf=0, minf=1 00:17:14.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.055 job2: (groupid=0, jobs=1): err= 0: 
pid=970193: Mon Jul 15 20:12:11 2024 00:17:14.055 read: IOPS=13, BW=55.5KiB/s (56.8kB/s)(56.0KiB/1009msec) 00:17:14.055 slat (nsec): min=26345, max=27435, avg=26908.36, stdev=345.42 00:17:14.055 clat (usec): min=41888, max=42002, avg=41963.06, stdev=32.54 00:17:14.055 lat (usec): min=41914, max=42029, avg=41989.96, stdev=32.59 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:14.055 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:14.055 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:14.055 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.055 | 99.99th=[42206] 00:17:14.055 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:17:14.055 slat (nsec): min=8672, max=71172, avg=32614.59, stdev=9376.09 00:17:14.055 clat (usec): min=254, max=1226, avg=777.86, stdev=180.13 00:17:14.055 lat (usec): min=263, max=1261, avg=810.48, stdev=185.73 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[ 265], 5.00th=[ 351], 10.00th=[ 494], 20.00th=[ 668], 00:17:14.055 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 857], 00:17:14.055 | 70.00th=[ 889], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[ 988], 00:17:14.055 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[ 1221], 99.95th=[ 1221], 00:17:14.055 | 99.99th=[ 1221] 00:17:14.055 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.055 lat (usec) : 500=10.08%, 750=21.10%, 1000=63.31% 00:17:14.055 lat (msec) : 2=2.85%, 50=2.66% 00:17:14.055 cpu : usr=0.99%, sys=2.18%, ctx=528, majf=0, minf=1 00:17:14.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.055 job3: (groupid=0, jobs=1): err= 0: pid=970205: Mon Jul 15 20:12:11 2024 00:17:14.055 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:17:14.055 slat (nsec): min=24435, max=25331, avg=24940.22, stdev=266.40 00:17:14.055 clat (usec): min=40950, max=42938, avg=42015.54, stdev=410.13 00:17:14.055 lat (usec): min=40975, max=42963, avg=42040.48, stdev=410.04 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:14.055 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:14.055 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:14.055 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:14.055 | 99.99th=[42730] 00:17:14.055 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:17:14.055 slat (nsec): min=9609, max=71806, avg=29392.56, stdev=8458.29 00:17:14.055 clat (usec): min=224, max=2818, avg=505.86, stdev=162.86 00:17:14.055 lat (usec): min=257, max=2850, avg=535.25, stdev=164.72 00:17:14.055 clat percentiles (usec): 00:17:14.055 | 1.00th=[ 277], 5.00th=[ 322], 10.00th=[ 351], 20.00th=[ 412], 00:17:14.055 | 30.00th=[ 437], 40.00th=[ 461], 50.00th=[ 486], 60.00th=[ 519], 00:17:14.055 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 652], 95.00th=[ 693], 00:17:14.055 | 99.00th=[ 857], 
99.50th=[ 1254], 99.90th=[ 2835], 99.95th=[ 2835], 00:17:14.055 | 99.99th=[ 2835] 00:17:14.055 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.055 lat (usec) : 250=0.57%, 500=53.02%, 750=40.57%, 1000=1.89% 00:17:14.055 lat (msec) : 2=0.38%, 4=0.19%, 50=3.40% 00:17:14.055 cpu : usr=0.68%, sys=1.45%, ctx=530, majf=0, minf=1 00:17:14.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.055 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.055 00:17:14.055 Run status group 0 (all jobs): 00:17:14.055 READ: bw=255KiB/s (261kB/s), 55.0KiB/s-79.1KiB/s (56.3kB/s-81.0kB/s), io=264KiB (270kB), run=1009-1035msec 00:17:14.055 WRITE: bw=7915KiB/s (8105kB/s), 1979KiB/s-2030KiB/s (2026kB/s-2078kB/s), io=8192KiB (8389kB), run=1009-1035msec 00:17:14.055 00:17:14.055 Disk stats (read/write): 00:17:14.055 nvme0n1: ios=65/512, merge=0/0, ticks=739/182, in_queue=921, util=96.39% 00:17:14.055 nvme0n2: ios=50/512, merge=0/0, ticks=939/383, in_queue=1322, util=98.47% 00:17:14.055 nvme0n3: ios=65/512, merge=0/0, ticks=1315/313, in_queue=1628, util=96.93% 00:17:14.055 nvme0n4: ios=13/512, merge=0/0, ticks=548/242, in_queue=790, util=89.51% 00:17:14.055 20:12:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:14.055 [global] 00:17:14.055 thread=1 00:17:14.055 invalidate=1 00:17:14.055 rw=randwrite 00:17:14.055 time_based=1 00:17:14.055 runtime=1 00:17:14.055 ioengine=libaio 00:17:14.055 direct=1 00:17:14.055 bs=4096 00:17:14.055 iodepth=1 00:17:14.055 norandommap=0 00:17:14.055 numjobs=1 00:17:14.055 00:17:14.055 verify_dump=1 00:17:14.055 verify_backlog=512 00:17:14.055 verify_state_save=0 00:17:14.055 do_verify=1 00:17:14.055 verify=crc32c-intel 00:17:14.055 [job0] 00:17:14.055 filename=/dev/nvme0n1 00:17:14.055 [job1] 00:17:14.055 filename=/dev/nvme0n2 00:17:14.055 [job2] 00:17:14.055 filename=/dev/nvme0n3 00:17:14.055 [job3] 00:17:14.055 filename=/dev/nvme0n4 00:17:14.055 Could not set queue depth (nvme0n1) 00:17:14.055 Could not set queue depth (nvme0n2) 00:17:14.055 Could not set queue depth (nvme0n3) 00:17:14.055 Could not set queue depth (nvme0n4) 00:17:14.322 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.322 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.322 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.322 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.322 fio-3.35 00:17:14.322 Starting 4 threads 00:17:15.735 00:17:15.735 job0: (groupid=0, jobs=1): err= 0: pid=970924: Mon Jul 15 20:12:12 2024 00:17:15.735 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:15.735 slat (nsec): min=6793, max=47253, avg=25715.62, stdev=4467.33 00:17:15.735 clat (usec): min=378, max=1220, avg=1003.06, stdev=192.89 00:17:15.735 lat (usec): min=404, max=1246, avg=1028.77, stdev=194.75 00:17:15.735 clat percentiles 
(usec): 00:17:15.735 | 1.00th=[ 498], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 930], 00:17:15.735 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1090], 00:17:15.735 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1172], 00:17:15.735 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:17:15.735 | 99.99th=[ 1221] 00:17:15.735 write: IOPS=801, BW=3205KiB/s (3282kB/s)(3208KiB/1001msec); 0 zone resets 00:17:15.735 slat (usec): min=9, max=41985, avg=78.93, stdev=1481.67 00:17:15.735 clat (usec): min=139, max=4338, avg=495.11, stdev=192.62 00:17:15.735 lat (usec): min=150, max=42274, avg=574.04, stdev=1487.34 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[ 212], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 355], 00:17:15.735 | 30.00th=[ 408], 40.00th=[ 437], 50.00th=[ 482], 60.00th=[ 545], 00:17:15.735 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 676], 00:17:15.735 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 4359], 99.95th=[ 4359], 00:17:15.735 | 99.99th=[ 4359] 00:17:15.735 bw ( KiB/s): min= 4096, max= 4096, per=36.74%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.735 lat (usec) : 250=0.99%, 500=31.66%, 750=34.70%, 1000=2.21% 00:17:15.735 lat (msec) : 2=30.37%, 10=0.08% 00:17:15.735 cpu : usr=1.90%, sys=3.50%, ctx=1319, majf=0, minf=1 00:17:15.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 issued rwts: total=512,802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.735 job1: (groupid=0, jobs=1): err= 0: pid=970937: Mon Jul 15 20:12:12 2024 00:17:15.735 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:15.735 slat (nsec): min=7040, max=54688, avg=24711.79, stdev=7602.66 00:17:15.735 clat (usec): min=476, max=1066, avg=806.37, stdev=93.41 00:17:15.735 lat (usec): min=484, max=1079, avg=831.08, stdev=96.00 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[ 553], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 734], 00:17:15.735 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 816], 60.00th=[ 840], 00:17:15.735 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 914], 95.00th=[ 938], 00:17:15.735 | 99.00th=[ 988], 99.50th=[ 1029], 99.90th=[ 1074], 99.95th=[ 1074], 00:17:15.735 | 99.99th=[ 1074] 00:17:15.735 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4024KiB/1001msec); 0 zone resets 00:17:15.735 slat (nsec): min=9297, max=61095, avg=27953.35, stdev=10452.96 00:17:15.735 clat (usec): min=159, max=4048, avg=528.58, stdev=211.38 00:17:15.735 lat (usec): min=173, max=4096, avg=556.53, stdev=212.53 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[ 178], 5.00th=[ 255], 10.00th=[ 314], 20.00th=[ 404], 00:17:15.735 | 30.00th=[ 453], 40.00th=[ 490], 50.00th=[ 529], 60.00th=[ 562], 00:17:15.735 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 775], 00:17:15.735 | 99.00th=[ 947], 99.50th=[ 1004], 99.90th=[ 3523], 99.95th=[ 4047], 00:17:15.735 | 99.99th=[ 4047] 00:17:15.735 bw ( KiB/s): min= 4096, max= 4096, per=36.74%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.735 lat (usec) : 250=3.23%, 500=24.90%, 750=42.75%, 1000=28.46% 00:17:15.735 lat (msec) : 2=0.53%, 4=0.07%, 10=0.07% 
00:17:15.735 cpu : usr=2.40%, sys=4.00%, ctx=1520, majf=0, minf=1 00:17:15.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 issued rwts: total=512,1006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.735 job2: (groupid=0, jobs=1): err= 0: pid=970957: Mon Jul 15 20:12:12 2024 00:17:15.735 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1016msec) 00:17:15.735 slat (nsec): min=27049, max=28267, avg=27482.53, stdev=336.12 00:17:15.735 clat (usec): min=40916, max=41815, avg=41013.38, stdev=208.35 00:17:15.735 lat (usec): min=40944, max=41842, avg=41040.87, stdev=208.23 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:15.735 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:15.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:15.735 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:15.735 | 99.99th=[41681] 00:17:15.735 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:15.735 slat (nsec): min=8590, max=55238, avg=27355.03, stdev=10664.49 00:17:15.735 clat (usec): min=247, max=1626, avg=580.12, stdev=164.68 00:17:15.735 lat (usec): min=257, max=1637, avg=607.47, stdev=164.70 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 420], 20.00th=[ 469], 00:17:15.735 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 562], 00:17:15.735 | 70.00th=[ 586], 80.00th=[ 660], 90.00th=[ 848], 95.00th=[ 922], 00:17:15.735 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1631], 99.95th=[ 1631], 00:17:15.735 | 99.99th=[ 1631] 00:17:15.735 bw ( KiB/s): min= 4096, max= 4096, per=36.74%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.735 lat (usec) : 250=0.19%, 500=26.65%, 750=55.39%, 1000=13.23% 00:17:15.735 lat (msec) : 2=1.32%, 50=3.21% 00:17:15.735 cpu : usr=0.89%, sys=1.67%, ctx=530, majf=0, minf=1 00:17:15.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.735 job3: (groupid=0, jobs=1): err= 0: pid=970963: Mon Jul 15 20:12:12 2024 00:17:15.735 read: IOPS=12, BW=51.8KiB/s (53.1kB/s)(52.0KiB/1003msec) 00:17:15.735 slat (nsec): min=24566, max=25395, avg=24881.08, stdev=229.02 00:17:15.735 clat (usec): min=41769, max=42150, avg=41939.82, stdev=124.15 00:17:15.735 lat (usec): min=41794, max=42175, avg=41964.70, stdev=124.13 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:15.735 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:15.735 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:15.735 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:15.735 | 99.99th=[42206] 00:17:15.735 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone 
resets 00:17:15.735 slat (nsec): min=9891, max=59329, avg=30962.75, stdev=6966.99 00:17:15.735 clat (usec): min=450, max=1932, avg=854.79, stdev=124.50 00:17:15.735 lat (usec): min=482, max=1965, avg=885.75, stdev=125.95 00:17:15.735 clat percentiles (usec): 00:17:15.735 | 1.00th=[ 562], 5.00th=[ 652], 10.00th=[ 701], 20.00th=[ 758], 00:17:15.735 | 30.00th=[ 807], 40.00th=[ 832], 50.00th=[ 857], 60.00th=[ 889], 00:17:15.735 | 70.00th=[ 914], 80.00th=[ 947], 90.00th=[ 1004], 95.00th=[ 1037], 00:17:15.735 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1926], 99.95th=[ 1926], 00:17:15.735 | 99.99th=[ 1926] 00:17:15.735 bw ( KiB/s): min= 4096, max= 4096, per=36.74%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.735 lat (usec) : 500=0.19%, 750=16.76%, 1000=70.29% 00:17:15.735 lat (msec) : 2=10.29%, 50=2.48% 00:17:15.735 cpu : usr=0.80%, sys=1.50%, ctx=525, majf=0, minf=1 00:17:15.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.735 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.735 00:17:15.735 Run status group 0 (all jobs): 00:17:15.735 READ: bw=4150KiB/s (4249kB/s), 51.8KiB/s-2046KiB/s (53.1kB/s-2095kB/s), io=4216KiB (4317kB), run=1001-1016msec 00:17:15.735 WRITE: bw=10.9MiB/s (11.4MB/s), 2016KiB/s-4020KiB/s (2064kB/s-4116kB/s), io=11.1MiB (11.6MB), run=1001-1016msec 00:17:15.735 00:17:15.735 Disk stats (read/write): 00:17:15.735 nvme0n1: ios=564/552, merge=0/0, ticks=1072/237, in_queue=1309, util=96.69% 00:17:15.735 nvme0n2: ios=540/727, merge=0/0, ticks=979/348, in_queue=1327, util=97.25% 00:17:15.735 nvme0n3: ios=46/512, merge=0/0, ticks=1213/262, in_queue=1475, util=96.41% 00:17:15.735 nvme0n4: ios=9/512, merge=0/0, ticks=378/407, in_queue=785, util=89.54% 00:17:15.735 20:12:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:15.735 [global] 00:17:15.735 thread=1 00:17:15.735 invalidate=1 00:17:15.735 rw=write 00:17:15.735 time_based=1 00:17:15.735 runtime=1 00:17:15.735 ioengine=libaio 00:17:15.735 direct=1 00:17:15.735 bs=4096 00:17:15.735 iodepth=128 00:17:15.735 norandommap=0 00:17:15.735 numjobs=1 00:17:15.735 00:17:15.735 verify_dump=1 00:17:15.735 verify_backlog=512 00:17:15.735 verify_state_save=0 00:17:15.735 do_verify=1 00:17:15.735 verify=crc32c-intel 00:17:15.735 [job0] 00:17:15.735 filename=/dev/nvme0n1 00:17:15.735 [job1] 00:17:15.735 filename=/dev/nvme0n2 00:17:15.735 [job2] 00:17:15.735 filename=/dev/nvme0n3 00:17:15.735 [job3] 00:17:15.735 filename=/dev/nvme0n4 00:17:15.735 Could not set queue depth (nvme0n1) 00:17:15.735 Could not set queue depth (nvme0n2) 00:17:15.735 Could not set queue depth (nvme0n3) 00:17:15.735 Could not set queue depth (nvme0n4) 00:17:16.058 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.058 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.058 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.058 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.058 fio-3.35 00:17:16.058 Starting 4 threads 00:17:17.446 00:17:17.446 job0: (groupid=0, jobs=1): err= 0: pid=971459: Mon Jul 15 20:12:14 2024 00:17:17.446 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:17:17.446 slat (nsec): min=853, max=7610.0k, avg=74031.84, stdev=458542.16 00:17:17.446 clat (usec): min=2087, max=25183, avg=10116.80, stdev=3408.01 00:17:17.446 lat (usec): min=2092, max=25187, avg=10190.83, stdev=3433.06 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 5342], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 7963], 00:17:17.446 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:17:17.446 | 70.00th=[10421], 80.00th=[11469], 90.00th=[14746], 95.00th=[18744], 00:17:17.446 | 99.00th=[21103], 99.50th=[22152], 99.90th=[24511], 99.95th=[25035], 00:17:17.446 | 99.99th=[25297] 00:17:17.446 write: IOPS=5954, BW=23.3MiB/s (24.4MB/s)(23.3MiB/1002msec); 0 zone resets 00:17:17.446 slat (nsec): min=1515, max=19044k, avg=85851.42, stdev=629254.56 00:17:17.446 clat (usec): min=1056, max=55562, avg=11577.08, stdev=8343.83 00:17:17.446 lat (usec): min=1064, max=55573, avg=11662.94, stdev=8392.89 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 2999], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 7242], 00:17:17.446 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372], 00:17:17.446 | 70.00th=[12125], 80.00th=[14222], 90.00th=[17433], 95.00th=[28443], 00:17:17.446 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:17:17.446 | 99.99th=[55313] 00:17:17.446 bw ( KiB/s): min=18725, max=28024, per=26.94%, avg=23374.50, stdev=6575.39, samples=2 00:17:17.446 iops : min= 4681, max= 7006, avg=5843.50, stdev=1644.02, samples=2 00:17:17.446 lat (msec) : 2=0.34%, 4=0.78%, 10=63.16%, 20=30.88%, 50=4.32% 00:17:17.446 lat (msec) : 100=0.53% 00:17:17.446 cpu : usr=4.00%, sys=4.70%, ctx=583, majf=0, minf=1 00:17:17.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:17.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.446 issued rwts: total=5632,5966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.446 job1: (groupid=0, jobs=1): err= 0: pid=971466: Mon Jul 15 20:12:14 2024 00:17:17.446 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:17:17.446 slat (nsec): min=864, max=18322k, avg=108157.75, stdev=725112.51 00:17:17.446 clat (usec): min=5793, max=53320, avg=13648.35, stdev=8774.59 00:17:17.446 lat (usec): min=6270, max=53324, avg=13756.51, stdev=8819.14 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9765], 00:17:17.446 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:17:17.446 | 70.00th=[11994], 80.00th=[12518], 90.00th=[21627], 95.00th=[34341], 00:17:17.446 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:17:17.446 | 99.99th=[53216] 00:17:17.446 write: IOPS=5494, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1002msec); 0 zone resets 00:17:17.446 slat (nsec): min=1487, max=8262.6k, avg=76878.31, stdev=410187.30 00:17:17.446 clat (usec): min=965, max=34457, avg=10373.11, stdev=4082.19 00:17:17.446 lat (usec): min=986, max=34467, avg=10449.99, stdev=4081.51 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 3163], 5.00th=[ 6980], 
10.00th=[ 7504], 20.00th=[ 8455], 00:17:17.446 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:17:17.446 | 70.00th=[10421], 80.00th=[11863], 90.00th=[13566], 95.00th=[19530], 00:17:17.446 | 99.00th=[28967], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:17:17.446 | 99.99th=[34341] 00:17:17.446 bw ( KiB/s): min=18448, max=24576, per=24.79%, avg=21512.00, stdev=4333.15, samples=2 00:17:17.446 iops : min= 4612, max= 6144, avg=5378.00, stdev=1083.29, samples=2 00:17:17.446 lat (usec) : 1000=0.03% 00:17:17.446 lat (msec) : 2=0.04%, 4=0.70%, 10=46.08%, 20=46.00%, 50=6.28% 00:17:17.446 lat (msec) : 100=0.88% 00:17:17.446 cpu : usr=2.80%, sys=4.40%, ctx=489, majf=0, minf=1 00:17:17.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:17.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.446 issued rwts: total=5120,5505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.446 job2: (groupid=0, jobs=1): err= 0: pid=971479: Mon Jul 15 20:12:14 2024 00:17:17.446 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:17:17.446 slat (nsec): min=937, max=12254k, avg=98606.56, stdev=699607.16 00:17:17.446 clat (usec): min=2704, max=35211, avg=12009.78, stdev=3737.68 00:17:17.446 lat (usec): min=4405, max=35220, avg=12108.39, stdev=3789.99 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 6194], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:17:17.446 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:17:17.446 | 70.00th=[12387], 80.00th=[13304], 90.00th=[16909], 95.00th=[19006], 00:17:17.446 | 99.00th=[25560], 99.50th=[29754], 99.90th=[32900], 99.95th=[33162], 00:17:17.446 | 99.99th=[35390] 00:17:17.446 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:17:17.446 slat (nsec): min=1667, max=9649.4k, avg=92692.26, stdev=471054.84 00:17:17.446 clat (usec): min=866, max=43596, avg=13100.26, stdev=7161.40 00:17:17.446 lat (usec): min=875, max=43602, avg=13192.95, stdev=7206.50 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 3261], 5.00th=[ 5407], 10.00th=[ 6587], 20.00th=[ 8455], 00:17:17.446 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:17:17.446 | 70.00th=[13435], 80.00th=[18482], 90.00th=[22152], 95.00th=[28705], 00:17:17.446 | 99.00th=[40109], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:17:17.446 | 99.99th=[43779] 00:17:17.446 bw ( KiB/s): min=20480, max=20521, per=23.63%, avg=20500.50, stdev=28.99, samples=2 00:17:17.446 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:17:17.446 lat (usec) : 1000=0.03% 00:17:17.446 lat (msec) : 2=0.06%, 4=0.89%, 10=30.26%, 20=60.76%, 50=8.00% 00:17:17.446 cpu : usr=3.89%, sys=4.89%, ctx=595, majf=0, minf=1 00:17:17.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:17.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.446 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.446 job3: (groupid=0, jobs=1): err= 0: pid=971487: Mon Jul 15 20:12:14 2024 00:17:17.446 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:17:17.446 slat (nsec): 
min=880, max=54385k, avg=105219.37, stdev=1065184.50 00:17:17.446 clat (usec): min=4856, max=76361, avg=14336.30, stdev=13929.18 00:17:17.446 lat (usec): min=4859, max=76368, avg=14441.52, stdev=14009.17 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7963], 20.00th=[ 8225], 00:17:17.446 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[10028], 00:17:17.446 | 70.00th=[11600], 80.00th=[14746], 90.00th=[22152], 95.00th=[50070], 00:17:17.446 | 99.00th=[70779], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:17:17.446 | 99.99th=[76022] 00:17:17.446 write: IOPS=5184, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1005msec); 0 zone resets 00:17:17.446 slat (nsec): min=1538, max=14404k, avg=84349.22, stdev=507662.97 00:17:17.446 clat (usec): min=2564, max=49647, avg=10328.58, stdev=4878.66 00:17:17.446 lat (usec): min=3982, max=49660, avg=10412.93, stdev=4923.89 00:17:17.446 clat percentiles (usec): 00:17:17.446 | 1.00th=[ 4948], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7504], 00:17:17.446 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 9110], 60.00th=[ 9896], 00:17:17.446 | 70.00th=[10814], 80.00th=[12518], 90.00th=[14615], 95.00th=[16188], 00:17:17.446 | 99.00th=[26608], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:17:17.446 | 99.99th=[49546] 00:17:17.446 bw ( KiB/s): min=16384, max=24576, per=23.60%, avg=20480.00, stdev=5792.62, samples=2 00:17:17.446 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:17:17.446 lat (msec) : 4=0.02%, 10=60.46%, 20=32.27%, 50=4.40%, 100=2.84% 00:17:17.446 cpu : usr=3.49%, sys=4.18%, ctx=566, majf=0, minf=1 00:17:17.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:17.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.446 issued rwts: total=5120,5210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.446 00:17:17.446 Run status group 0 (all jobs): 00:17:17.446 READ: bw=81.1MiB/s (85.0MB/s), 19.4MiB/s-22.0MiB/s (20.3MB/s-23.0MB/s), io=81.5MiB (85.4MB), run=1002-1005msec 00:17:17.446 WRITE: bw=84.7MiB/s (88.9MB/s), 19.9MiB/s-23.3MiB/s (20.9MB/s-24.4MB/s), io=85.2MiB (89.3MB), run=1002-1005msec 00:17:17.446 00:17:17.446 Disk stats (read/write): 00:17:17.446 nvme0n1: ios=4606/4608, merge=0/0, ticks=28694/34727, in_queue=63421, util=93.39% 00:17:17.446 nvme0n2: ios=4131/4608, merge=0/0, ticks=18019/14430, in_queue=32449, util=88.58% 00:17:17.446 nvme0n3: ios=4139/4335, merge=0/0, ticks=47047/54391, in_queue=101438, util=100.00% 00:17:17.446 nvme0n4: ios=4608/5072, merge=0/0, ticks=26302/23140, in_queue=49442, util=89.34% 00:17:17.446 20:12:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:17.446 [global] 00:17:17.446 thread=1 00:17:17.446 invalidate=1 00:17:17.446 rw=randwrite 00:17:17.446 time_based=1 00:17:17.446 runtime=1 00:17:17.446 ioengine=libaio 00:17:17.446 direct=1 00:17:17.446 bs=4096 00:17:17.446 iodepth=128 00:17:17.446 norandommap=0 00:17:17.446 numjobs=1 00:17:17.446 00:17:17.446 verify_dump=1 00:17:17.446 verify_backlog=512 00:17:17.446 verify_state_save=0 00:17:17.446 do_verify=1 00:17:17.446 verify=crc32c-intel 00:17:17.446 [job0] 00:17:17.446 filename=/dev/nvme0n1 00:17:17.446 [job1] 00:17:17.446 filename=/dev/nvme0n2 00:17:17.446 [job2] 
00:17:17.446 filename=/dev/nvme0n3 00:17:17.446 [job3] 00:17:17.446 filename=/dev/nvme0n4 00:17:17.446 Could not set queue depth (nvme0n1) 00:17:17.446 Could not set queue depth (nvme0n2) 00:17:17.446 Could not set queue depth (nvme0n3) 00:17:17.446 Could not set queue depth (nvme0n4) 00:17:17.705 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.705 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.705 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.705 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.705 fio-3.35 00:17:17.705 Starting 4 threads 00:17:19.090 00:17:19.090 job0: (groupid=0, jobs=1): err= 0: pid=971982: Mon Jul 15 20:12:16 2024 00:17:19.090 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:17:19.090 slat (nsec): min=938, max=15061k, avg=87494.35, stdev=668692.70 00:17:19.090 clat (usec): min=4985, max=28098, avg=11605.80, stdev=3839.48 00:17:19.090 lat (usec): min=5012, max=30283, avg=11693.29, stdev=3891.78 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 5604], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:17:19.090 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[10945], 00:17:19.090 | 70.00th=[12125], 80.00th=[13566], 90.00th=[17433], 95.00th=[19792], 00:17:19.090 | 99.00th=[23987], 99.50th=[25035], 99.90th=[27395], 99.95th=[27395], 00:17:19.090 | 99.99th=[28181] 00:17:19.090 write: IOPS=5579, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:17:19.090 slat (nsec): min=1547, max=13798k, avg=90155.28, stdev=575547.73 00:17:19.090 clat (usec): min=1284, max=51343, avg=12156.74, stdev=8084.78 00:17:19.090 lat (usec): min=1294, max=51352, avg=12246.90, stdev=8141.88 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7701], 00:17:19.090 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10683], 00:17:19.090 | 70.00th=[11994], 80.00th=[14615], 90.00th=[19530], 95.00th=[27919], 00:17:19.090 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:17:19.090 | 99.99th=[51119] 00:17:19.090 bw ( KiB/s): min=17336, max=26688, per=23.08%, avg=22012.00, stdev=6612.86, samples=2 00:17:19.090 iops : min= 4334, max= 6672, avg=5503.00, stdev=1653.22, samples=2 00:17:19.090 lat (msec) : 2=0.03%, 4=0.39%, 10=50.83%, 20=41.67%, 50=6.89% 00:17:19.090 lat (msec) : 100=0.20% 00:17:19.090 cpu : usr=3.37%, sys=5.75%, ctx=473, majf=0, minf=1 00:17:19.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:19.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.090 issued rwts: total=5120,5630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.090 job1: (groupid=0, jobs=1): err= 0: pid=971989: Mon Jul 15 20:12:16 2024 00:17:19.090 read: IOPS=5327, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1006msec) 00:17:19.090 slat (nsec): min=908, max=10483k, avg=85013.51, stdev=584988.13 00:17:19.090 clat (usec): min=2402, max=27704, avg=11344.26, stdev=3546.50 00:17:19.090 lat (usec): min=3916, max=27733, avg=11429.28, stdev=3576.91 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 5407], 
5.00th=[ 6325], 10.00th=[ 7439], 20.00th=[ 8356], 00:17:19.090 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[11076], 60.00th=[11863], 00:17:19.090 | 70.00th=[12780], 80.00th=[14222], 90.00th=[16450], 95.00th=[18220], 00:17:19.090 | 99.00th=[21103], 99.50th=[21103], 99.90th=[22938], 99.95th=[22938], 00:17:19.090 | 99.99th=[27657] 00:17:19.090 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:17:19.090 slat (nsec): min=1523, max=14451k, avg=90686.04, stdev=489899.31 00:17:19.090 clat (usec): min=1211, max=33844, avg=11854.60, stdev=5980.13 00:17:19.090 lat (usec): min=1223, max=33848, avg=11945.28, stdev=6013.00 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 3752], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 6849], 00:17:19.090 | 30.00th=[ 7767], 40.00th=[ 9110], 50.00th=[10552], 60.00th=[11994], 00:17:19.090 | 70.00th=[13435], 80.00th=[15795], 90.00th=[20841], 95.00th=[24249], 00:17:19.090 | 99.00th=[30016], 99.50th=[32637], 99.90th=[33817], 99.95th=[33817], 00:17:19.090 | 99.99th=[33817] 00:17:19.090 bw ( KiB/s): min=21736, max=23320, per=23.62%, avg=22528.00, stdev=1120.06, samples=2 00:17:19.090 iops : min= 5434, max= 5830, avg=5632.00, stdev=280.01, samples=2 00:17:19.090 lat (msec) : 2=0.02%, 4=0.63%, 10=43.27%, 20=49.10%, 50=6.98% 00:17:19.090 cpu : usr=3.18%, sys=5.67%, ctx=501, majf=0, minf=1 00:17:19.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:19.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.090 issued rwts: total=5359,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.090 job2: (groupid=0, jobs=1): err= 0: pid=971999: Mon Jul 15 20:12:16 2024 00:17:19.090 read: IOPS=7170, BW=28.0MiB/s (29.4MB/s)(28.2MiB/1007msec) 00:17:19.090 slat (nsec): min=996, max=8293.1k, avg=68362.21, stdev=484182.72 00:17:19.090 clat (usec): min=3516, max=20653, avg=9029.32, stdev=2426.70 00:17:19.090 lat (usec): min=3521, max=20657, avg=9097.68, stdev=2443.13 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 5145], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7046], 00:17:19.090 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:17:19.090 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[12256], 95.00th=[13304], 00:17:19.090 | 99.00th=[16712], 99.50th=[18482], 99.90th=[19792], 99.95th=[20579], 00:17:19.090 | 99.99th=[20579] 00:17:19.090 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:17:19.090 slat (nsec): min=1582, max=7268.8k, avg=61804.73, stdev=374309.40 00:17:19.090 clat (usec): min=1806, max=22293, avg=8125.49, stdev=2938.11 00:17:19.090 lat (usec): min=2070, max=22296, avg=8187.29, stdev=2948.48 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 3130], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 5997], 00:17:19.090 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 7898], 00:17:19.090 | 70.00th=[ 8586], 80.00th=[10290], 90.00th=[11863], 95.00th=[14091], 00:17:19.090 | 99.00th=[18482], 99.50th=[19530], 99.90th=[20841], 99.95th=[20841], 00:17:19.090 | 99.99th=[22414] 00:17:19.090 bw ( KiB/s): min=30128, max=30712, per=31.89%, avg=30420.00, stdev=412.95, samples=2 00:17:19.090 iops : min= 7532, max= 7678, avg=7605.00, stdev=103.24, samples=2 00:17:19.090 lat (msec) : 2=0.01%, 4=1.92%, 10=73.06%, 20=24.85%, 50=0.16% 00:17:19.090 cpu : usr=4.57%, 
sys=5.96%, ctx=623, majf=0, minf=1 00:17:19.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:19.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.090 issued rwts: total=7221,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.090 job3: (groupid=0, jobs=1): err= 0: pid=972005: Mon Jul 15 20:12:16 2024 00:17:19.090 read: IOPS=4617, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1008msec) 00:17:19.090 slat (nsec): min=976, max=19217k, avg=108124.85, stdev=796281.77 00:17:19.090 clat (usec): min=2851, max=41790, avg=13834.70, stdev=6623.35 00:17:19.090 lat (usec): min=5050, max=41821, avg=13942.83, stdev=6687.59 00:17:19.090 clat percentiles (usec): 00:17:19.090 | 1.00th=[ 5211], 5.00th=[ 7177], 10.00th=[ 8094], 20.00th=[ 9241], 00:17:19.090 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11076], 60.00th=[12256], 00:17:19.090 | 70.00th=[14091], 80.00th=[19268], 90.00th=[24511], 95.00th=[28443], 00:17:19.090 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[38536], 00:17:19.090 | 99.99th=[41681] 00:17:19.090 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:17:19.090 slat (nsec): min=1590, max=10321k, avg=92861.46, stdev=547788.39 00:17:19.090 clat (usec): min=888, max=36247, avg=12371.17, stdev=5768.08 00:17:19.090 lat (usec): min=897, max=36253, avg=12464.03, stdev=5803.62 00:17:19.090 clat percentiles (usec): 00:17:19.091 | 1.00th=[ 3785], 5.00th=[ 5145], 10.00th=[ 6718], 20.00th=[ 8160], 00:17:19.091 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11207], 60.00th=[12387], 00:17:19.091 | 70.00th=[13698], 80.00th=[15139], 90.00th=[19268], 95.00th=[26346], 00:17:19.091 | 99.00th=[32375], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:17:19.091 | 99.99th=[36439] 00:17:19.091 bw ( KiB/s): min=20144, max=20160, per=21.13%, avg=20152.00, stdev=11.31, samples=2 00:17:19.091 iops : min= 5036, max= 5040, avg=5038.00, stdev= 2.83, samples=2 00:17:19.091 lat (usec) : 1000=0.03% 00:17:19.091 lat (msec) : 2=0.14%, 4=0.67%, 10=35.27%, 20=49.50%, 50=14.40% 00:17:19.091 cpu : usr=2.78%, sys=5.56%, ctx=435, majf=0, minf=1 00:17:19.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:19.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.091 issued rwts: total=4654,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.091 00:17:19.091 Run status group 0 (all jobs): 00:17:19.091 READ: bw=86.5MiB/s (90.7MB/s), 18.0MiB/s-28.0MiB/s (18.9MB/s-29.4MB/s), io=87.3MiB (91.6MB), run=1006-1009msec 00:17:19.091 WRITE: bw=93.2MiB/s (97.7MB/s), 19.8MiB/s-29.8MiB/s (20.8MB/s-31.2MB/s), io=94.0MiB (98.6MB), run=1006-1009msec 00:17:19.091 00:17:19.091 Disk stats (read/write): 00:17:19.091 nvme0n1: ios=4133/4583, merge=0/0, ticks=45444/52613, in_queue=98057, util=99.40% 00:17:19.091 nvme0n2: ios=4130/4487, merge=0/0, ticks=48033/55227, in_queue=103260, util=88.28% 00:17:19.091 nvme0n3: ios=6173/6624, merge=0/0, ticks=53598/49680, in_queue=103278, util=95.79% 00:17:19.091 nvme0n4: ios=3614/3917, merge=0/0, ticks=34202/35524, in_queue=69726, util=96.69% 00:17:19.091 20:12:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:19.091 20:12:16 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=972301 00:17:19.091 20:12:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:19.091 20:12:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:19.091 [global] 00:17:19.091 thread=1 00:17:19.091 invalidate=1 00:17:19.091 rw=read 00:17:19.091 time_based=1 00:17:19.091 runtime=10 00:17:19.091 ioengine=libaio 00:17:19.091 direct=1 00:17:19.091 bs=4096 00:17:19.091 iodepth=1 00:17:19.091 norandommap=1 00:17:19.091 numjobs=1 00:17:19.091 00:17:19.091 [job0] 00:17:19.091 filename=/dev/nvme0n1 00:17:19.091 [job1] 00:17:19.091 filename=/dev/nvme0n2 00:17:19.091 [job2] 00:17:19.091 filename=/dev/nvme0n3 00:17:19.091 [job3] 00:17:19.091 filename=/dev/nvme0n4 00:17:19.091 Could not set queue depth (nvme0n1) 00:17:19.091 Could not set queue depth (nvme0n2) 00:17:19.091 Could not set queue depth (nvme0n3) 00:17:19.091 Could not set queue depth (nvme0n4) 00:17:19.351 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.351 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.351 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.351 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.351 fio-3.35 00:17:19.351 Starting 4 threads 00:17:21.924 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:21.924 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:21.924 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:17:21.924 fio: pid=972506, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.186 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.186 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:22.186 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8015872, buflen=4096 00:17:22.186 fio: pid=972502, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.447 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.447 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:22.447 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=286720, buflen=4096 00:17:22.447 fio: pid=972488, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.447 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.447 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:22.447 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=958464, buflen=4096 00:17:22.447 fio: pid=972493, err=121/file:io_u.c:1889, 
func=io_u error, error=Remote I/O error 00:17:22.706 00:17:22.706 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=972488: Mon Jul 15 20:12:19 2024 00:17:22.706 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(280KiB/2925msec) 00:17:22.706 slat (usec): min=24, max=15620, avg=267.34, stdev=1857.42 00:17:22.706 clat (usec): min=1031, max=42217, avg=41203.46, stdev=4886.50 00:17:22.706 lat (usec): min=1066, max=42963, avg=41251.47, stdev=4889.82 00:17:22.706 clat percentiles (usec): 00:17:22.706 | 1.00th=[ 1029], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:22.706 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:22.706 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:22.706 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:22.706 | 99.99th=[42206] 00:17:22.706 bw ( KiB/s): min= 96, max= 96, per=3.21%, avg=96.00, stdev= 0.00, samples=5 00:17:22.706 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:22.706 lat (msec) : 2=1.41%, 50=97.18% 00:17:22.706 cpu : usr=0.10%, sys=0.00%, ctx=76, majf=0, minf=1 00:17:22.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.706 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.706 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.706 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=972493: Mon Jul 15 20:12:19 2024 00:17:22.706 read: IOPS=75, BW=300KiB/s (308kB/s)(936KiB/3115msec) 00:17:22.706 slat (nsec): min=1971, max=22037k, avg=251360.60, stdev=1868741.50 00:17:22.706 clat (usec): min=956, max=42196, avg=12959.91, stdev=18395.44 00:17:22.706 lat (usec): min=965, max=50775, avg=13212.23, stdev=18513.04 00:17:22.706 clat percentiles (usec): 00:17:22.706 | 1.00th=[ 1045], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1287], 00:17:22.706 | 30.00th=[ 1319], 40.00th=[ 1336], 50.00th=[ 1352], 60.00th=[ 1401], 00:17:22.706 | 70.00th=[ 1680], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:22.706 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:22.706 | 99.99th=[42206] 00:17:22.706 bw ( KiB/s): min= 96, max= 965, per=9.47%, avg=283.50, stdev=336.52, samples=6 00:17:22.707 iops : min= 24, max= 241, avg=70.83, stdev=84.03, samples=6 00:17:22.707 lat (usec) : 1000=0.85% 00:17:22.707 lat (msec) : 2=69.79%, 4=0.43%, 50=28.51% 00:17:22.707 cpu : usr=0.06%, sys=0.22%, ctx=240, majf=0, minf=1 00:17:22.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.707 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.707 issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.707 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=972502: Mon Jul 15 20:12:19 2024 00:17:22.707 read: IOPS=710, BW=2841KiB/s (2910kB/s)(7828KiB/2755msec) 00:17:22.707 slat (nsec): min=24519, max=65465, avg=26901.24, stdev=4068.28 00:17:22.707 clat (usec): min=929, max=42087, avg=1363.12, stdev=1841.26 00:17:22.707 lat (usec): min=956, max=42112, avg=1390.02, 
stdev=1841.19 00:17:22.707 clat percentiles (usec): 00:17:22.707 | 1.00th=[ 1090], 5.00th=[ 1172], 10.00th=[ 1205], 20.00th=[ 1237], 00:17:22.707 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1287], 00:17:22.707 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1352], 95.00th=[ 1385], 00:17:22.707 | 99.00th=[ 1467], 99.50th=[ 1516], 99.90th=[42206], 99.95th=[42206], 00:17:22.707 | 99.99th=[42206] 00:17:22.707 bw ( KiB/s): min= 2040, max= 3088, per=95.49%, avg=2852.80, stdev=455.04, samples=5 00:17:22.707 iops : min= 510, max= 772, avg=713.20, stdev=113.76, samples=5 00:17:22.707 lat (usec) : 1000=0.10% 00:17:22.707 lat (msec) : 2=99.64%, 50=0.20% 00:17:22.707 cpu : usr=1.02%, sys=3.16%, ctx=1959, majf=0, minf=1 00:17:22.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.707 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.707 issued rwts: total=1958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.707 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=972506: Mon Jul 15 20:12:19 2024 00:17:22.707 read: IOPS=25, BW=98.7KiB/s (101kB/s)(260KiB/2635msec) 00:17:22.707 slat (nsec): min=24909, max=53303, avg=26537.98, stdev=3427.26 00:17:22.707 clat (usec): min=933, max=42255, avg=40095.67, stdev=8587.88 00:17:22.707 lat (usec): min=987, max=42281, avg=40122.22, stdev=8585.97 00:17:22.707 clat percentiles (usec): 00:17:22.707 | 1.00th=[ 938], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:22.707 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:22.707 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:22.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:22.707 | 99.99th=[42206] 00:17:22.707 bw ( KiB/s): min= 96, max= 104, per=3.31%, avg=99.20, stdev= 4.38, samples=5 00:17:22.707 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:17:22.707 lat (usec) : 1000=1.52% 00:17:22.707 lat (msec) : 2=3.03%, 50=93.94% 00:17:22.707 cpu : usr=0.00%, sys=0.15%, ctx=66, majf=0, minf=2 00:17:22.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.707 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.707 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.707 00:17:22.707 Run status group 0 (all jobs): 00:17:22.707 READ: bw=2987KiB/s (3059kB/s), 95.7KiB/s-2841KiB/s (98.0kB/s-2910kB/s), io=9304KiB (9527kB), run=2635-3115msec 00:17:22.707 00:17:22.707 Disk stats (read/write): 00:17:22.707 nvme0n1: ios=97/0, merge=0/0, ticks=3592/0, in_queue=3592, util=99.17% 00:17:22.707 nvme0n2: ios=258/0, merge=0/0, ticks=3177/0, in_queue=3177, util=98.02% 00:17:22.707 nvme0n3: ios=1876/0, merge=0/0, ticks=2884/0, in_queue=2884, util=98.70% 00:17:22.707 nvme0n4: ios=64/0, merge=0/0, ticks=2566/0, in_queue=2566, util=96.42% 00:17:22.707 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.707 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc3 00:17:22.966 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.966 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:22.966 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.966 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:23.225 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.225 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 972301 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:23.486 nvmf hotplug test: fio failed as expected 00:17:23.486 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:23.746 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.746 20:12:20 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.746 rmmod nvme_tcp 00:17:23.746 rmmod nvme_fabrics 00:17:23.746 rmmod nvme_keyring 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 968330 ']' 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 968330 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 968330 ']' 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 968330 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 968330 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 968330' 00:17:23.746 killing process with pid 968330 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 968330 00:17:23.746 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 968330 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.006 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.919 20:12:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.919 00:17:25.919 real 0m28.299s 00:17:25.919 user 2m31.990s 00:17:25.919 sys 0m8.793s 00:17:25.919 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.919 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.919 ************************************ 00:17:25.919 END TEST nvmf_fio_target 00:17:25.919 ************************************ 00:17:25.919 20:12:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.919 20:12:23 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:25.919 20:12:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.919 20:12:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.919 20:12:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
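The hotplug sequence above is the failure path nvmf_fio_target is exercising: fio-wrapper is started in the background against the four connected namespaces, the backing raid and malloc bdevs are deleted over RPC while I/O is still in flight, and the harness then expects fio to exit non-zero ("nvmf hotplug test: fio failed as expected") before disconnecting the controller and deleting the subsystem. A minimal standalone sketch of that flow follows; the paths, bdev names and NQN are taken from the log above, while the background/wait handling is simplified relative to target/fio.sh.

# Sketch of the hotplug flow shown above; control flow simplified relative to target/fio.sh.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 10-second read workload against the connected namespaces, left running in the background.
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Delete the backing bdevs while fio is still issuing I/O.
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done

# With the namespaces gone, fio is expected to fail with remote I/O errors.
if wait "$fio_pid"; then
    echo "unexpected: fio completed without error"
else
    echo "nvmf hotplug test: fio failed as expected"
fi

# Tear down the initiator connection and the subsystem.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1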
00:17:26.178 ************************************ 00:17:26.178 START TEST nvmf_bdevio 00:17:26.178 ************************************ 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:26.178 * Looking for test storage... 00:17:26.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.178 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.179 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.339 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:34.340 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:34.340 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:34.340 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:34.340 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:34.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:17:34.340 00:17:34.340 --- 10.0.0.2 ping statistics --- 00:17:34.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.340 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:17:34.340 00:17:34.340 --- 10.0.0.1 ping statistics --- 00:17:34.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.340 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=977516 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 977516 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 977516 ']' 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.340 20:12:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.340 [2024-07-15 20:12:30.774801] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:17:34.340 [2024-07-15 20:12:30.774893] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.340 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.340 [2024-07-15 20:12:30.869621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.340 [2024-07-15 20:12:30.965942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.340 [2024-07-15 20:12:30.966002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:34.340 [2024-07-15 20:12:30.966010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.340 [2024-07-15 20:12:30.966017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.340 [2024-07-15 20:12:30.966023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.340 [2024-07-15 20:12:30.966217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.340 [2024-07-15 20:12:30.966273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:34.340 [2024-07-15 20:12:30.966422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:34.340 [2024-07-15 20:12:30.966423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.340 [2024-07-15 20:12:31.614325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.340 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.340 Malloc0 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
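The rpc_cmd calls above are the whole target-side bring-up for this bdevio run: a TCP transport, one 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. Outside the harness the same sequence can be issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock the target was started on; a sketch using only flags that appear in the log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with the same options the harness passes (-o -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB backing bdev with a 512-byte block size (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem allowing any host (-a) with a fixed serial number (-s).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Attach the bdev as a namespace and open the TCP listener reported in the notice that follows.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420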
00:17:34.341 [2024-07-15 20:12:31.679823] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.341 { 00:17:34.341 "params": { 00:17:34.341 "name": "Nvme$subsystem", 00:17:34.341 "trtype": "$TEST_TRANSPORT", 00:17:34.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.341 "adrfam": "ipv4", 00:17:34.341 "trsvcid": "$NVMF_PORT", 00:17:34.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.341 "hdgst": ${hdgst:-false}, 00:17:34.341 "ddgst": ${ddgst:-false} 00:17:34.341 }, 00:17:34.341 "method": "bdev_nvme_attach_controller" 00:17:34.341 } 00:17:34.341 EOF 00:17:34.341 )") 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:34.341 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.341 "params": { 00:17:34.341 "name": "Nvme1", 00:17:34.341 "trtype": "tcp", 00:17:34.341 "traddr": "10.0.0.2", 00:17:34.341 "adrfam": "ipv4", 00:17:34.341 "trsvcid": "4420", 00:17:34.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.341 "hdgst": false, 00:17:34.341 "ddgst": false 00:17:34.341 }, 00:17:34.341 "method": "bdev_nvme_attach_controller" 00:17:34.341 }' 00:17:34.341 [2024-07-15 20:12:31.735975] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
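The --json /dev/fd/62 argument hands bdevio a generated SPDK JSON config whose only entry is the bdev_nvme_attach_controller call printed above, so the bdevio process creates Nvme1n1 over NVMe/TCP before running its suite. A standalone equivalent might look like the sketch below; the attach parameters are copied from the printf output above, while the outer "subsystems"/"bdev" framing is the usual shape of an SPDK JSON config and is assumed here rather than shown in the log.

BDEVIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio

# Feed the config on stdin instead of the harness's /dev/fd/62 process substitution.
$BDEVIO --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON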
00:17:34.341 [2024-07-15 20:12:31.736047] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977861 ] 00:17:34.341 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.602 [2024-07-15 20:12:31.802515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:34.602 [2024-07-15 20:12:31.878427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.602 [2024-07-15 20:12:31.878548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.602 [2024-07-15 20:12:31.878551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.602 I/O targets: 00:17:34.602 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:34.602 00:17:34.602 00:17:34.602 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.602 http://cunit.sourceforge.net/ 00:17:34.602 00:17:34.602 00:17:34.602 Suite: bdevio tests on: Nvme1n1 00:17:34.863 Test: blockdev write read block ...passed 00:17:34.863 Test: blockdev write zeroes read block ...passed 00:17:34.863 Test: blockdev write zeroes read no split ...passed 00:17:34.863 Test: blockdev write zeroes read split ...passed 00:17:34.863 Test: blockdev write zeroes read split partial ...passed 00:17:34.863 Test: blockdev reset ...[2024-07-15 20:12:32.242549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.863 [2024-07-15 20:12:32.242611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aace0 (9): Bad file descriptor 00:17:34.863 [2024-07-15 20:12:32.259969] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:34.863 passed 00:17:35.124 Test: blockdev write read 8 blocks ...passed 00:17:35.124 Test: blockdev write read size > 128k ...passed 00:17:35.124 Test: blockdev write read invalid size ...passed 00:17:35.124 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:35.124 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:35.124 Test: blockdev write read max offset ...passed 00:17:35.124 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:35.124 Test: blockdev writev readv 8 blocks ...passed 00:17:35.124 Test: blockdev writev readv 30 x 1block ...passed 00:17:35.124 Test: blockdev writev readv block ...passed 00:17:35.124 Test: blockdev writev readv size > 128k ...passed 00:17:35.124 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:35.124 Test: blockdev comparev and writev ...[2024-07-15 20:12:32.531024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.124 [2024-07-15 20:12:32.531050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.124 [2024-07-15 20:12:32.531061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.124 [2024-07-15 20:12:32.531066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.124 [2024-07-15 20:12:32.531673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.124 [2024-07-15 20:12:32.531682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.124 [2024-07-15 20:12:32.531691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.125 [2024-07-15 20:12:32.531696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.125 [2024-07-15 20:12:32.532297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.125 [2024-07-15 20:12:32.532305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.125 [2024-07-15 20:12:32.532315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.125 [2024-07-15 20:12:32.532319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.125 [2024-07-15 20:12:32.532918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.125 [2024-07-15 20:12:32.532925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.125 [2024-07-15 20:12:32.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.125 [2024-07-15 20:12:32.532939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.386 passed 00:17:35.386 Test: blockdev nvme passthru rw ...passed 00:17:35.386 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:12:32.618219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.386 [2024-07-15 20:12:32.618231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.386 [2024-07-15 20:12:32.618665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.386 [2024-07-15 20:12:32.618672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.386 [2024-07-15 20:12:32.619160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.386 [2024-07-15 20:12:32.619167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.386 [2024-07-15 20:12:32.619615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.386 [2024-07-15 20:12:32.619622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.386 passed 00:17:35.386 Test: blockdev nvme admin passthru ...passed 00:17:35.386 Test: blockdev copy ...passed 00:17:35.386 00:17:35.386 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.386 suites 1 1 n/a 0 0 00:17:35.386 tests 23 23 23 0 0 00:17:35.386 asserts 152 152 152 0 n/a 00:17:35.386 00:17:35.386 Elapsed time = 1.314 seconds 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.386 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.386 rmmod nvme_tcp 00:17:35.647 rmmod nvme_fabrics 00:17:35.647 rmmod nvme_keyring 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 977516 ']' 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 977516 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
977516 ']' 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 977516 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 977516 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 977516' 00:17:35.647 killing process with pid 977516 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 977516 00:17:35.647 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 977516 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.647 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.907 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.816 20:12:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.816 00:17:37.816 real 0m11.784s 00:17:37.816 user 0m12.650s 00:17:37.816 sys 0m5.846s 00:17:37.816 20:12:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.816 20:12:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.816 ************************************ 00:17:37.816 END TEST nvmf_bdevio 00:17:37.816 ************************************ 00:17:37.816 20:12:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:37.816 20:12:35 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:37.816 20:12:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:37.816 20:12:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.816 20:12:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.816 ************************************ 00:17:37.816 START TEST nvmf_auth_target 00:17:37.816 ************************************ 00:17:37.816 20:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:38.076 * Looking for test storage... 
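
Before nvmf_auth_target gets going, the nvmf_bdevio run above tears itself down with the shared nvmftestfini/killprocess helpers from nvmf/common.sh. A minimal sketch of that teardown, limited to the commands that actually appear in this trace; the namespace removal at the end is an assumption, since _remove_spdk_ns runs with its trace suppressed (xtrace_disable_per_cmd) and its body is not shown here:

    modprobe -v -r nvme-tcp               # nvmfcleanup: unload the kernel initiator stack
    modprobe -v -r nvme-fabrics           #   (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above)
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop the nvmf_tgt that served this test
    # assumed content of _remove_spdk_ns -- not visible in this log
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1              # drop the initiator-side test address
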
00:17:38.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.076 20:12:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.077 20:12:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.279 20:12:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:46.279 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:46.279 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:46.279 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:46.279 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:17:46.279 00:17:46.279 --- 10.0.0.2 ping statistics --- 00:17:46.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.279 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:17:46.279 00:17:46.279 --- 10.0.0.1 ping statistics --- 00:17:46.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.279 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:17:46.279 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=982197 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 982197 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 982197 ']' 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
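
The nvmf_tcp_init block a few lines up wires the two ports of the same NIC back-to-back through a network namespace, so target and initiator can talk over real TCP on one machine. Condensed to the commands visible in this trace (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are what this particular rig uses):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns

Once both pings succeed, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth) so the target-side DH-HMAC-CHAP debug log is captured, while the host-side spdk_tgt that auth.sh starts next listens on /var/tmp/host.sock in the root namespace alongside the kernel initiator.
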
00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.280 20:12:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=982329 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e90b53a87d7cb0d4f2368df3c01772a35f26ba29a8766bee 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2gM 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e90b53a87d7cb0d4f2368df3c01772a35f26ba29a8766bee 0 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e90b53a87d7cb0d4f2368df3c01772a35f26ba29a8766bee 0 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e90b53a87d7cb0d4f2368df3c01772a35f26ba29a8766bee 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2gM 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2gM 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.2gM 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9198be37230c13091fbb7729c4e559e60c77c33ec77b509b5dde31bf8218e44c 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8ZE 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9198be37230c13091fbb7729c4e559e60c77c33ec77b509b5dde31bf8218e44c 3 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9198be37230c13091fbb7729c4e559e60c77c33ec77b509b5dde31bf8218e44c 3 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9198be37230c13091fbb7729c4e559e60c77c33ec77b509b5dde31bf8218e44c 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8ZE 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8ZE 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8ZE 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28d73d7c02fc2fead124da240b983d96 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yMh 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28d73d7c02fc2fead124da240b983d96 1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28d73d7c02fc2fead124da240b983d96 1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=28d73d7c02fc2fead124da240b983d96 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yMh 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yMh 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.yMh 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9d5a37e7303c0515a9da47f63fe750eb585dbbf9f3910325 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3aj 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9d5a37e7303c0515a9da47f63fe750eb585dbbf9f3910325 2 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9d5a37e7303c0515a9da47f63fe750eb585dbbf9f3910325 2 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9d5a37e7303c0515a9da47f63fe750eb585dbbf9f3910325 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3aj 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3aj 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.3aj 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eb3ad737dd7b2058f59437a98e08a01249fee174cb0597e2 00:17:46.280 
20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FY1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eb3ad737dd7b2058f59437a98e08a01249fee174cb0597e2 2 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eb3ad737dd7b2058f59437a98e08a01249fee174cb0597e2 2 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eb3ad737dd7b2058f59437a98e08a01249fee174cb0597e2 00:17:46.280 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FY1 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FY1 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.FY1 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.281 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f2a49eb44c5c565b408b42ad0dff01ab 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rnF 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f2a49eb44c5c565b408b42ad0dff01ab 1 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f2a49eb44c5c565b408b42ad0dff01ab 1 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f2a49eb44c5c565b408b42ad0dff01ab 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rnF 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rnF 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rnF 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=13d04899a366279f91480a9fa95439d3a2e7879c410d626f8e39dab554b64d0e 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RxJ 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 13d04899a366279f91480a9fa95439d3a2e7879c410d626f8e39dab554b64d0e 3 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 13d04899a366279f91480a9fa95439d3a2e7879c410d626f8e39dab554b64d0e 3 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=13d04899a366279f91480a9fa95439d3a2e7879c410d626f8e39dab554b64d0e 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RxJ 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RxJ 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.RxJ 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:46.541 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 982197 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 982197 ']' 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
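
All of the DHCHAP secrets above (keys[0..3] and ckeys[0..2]) are produced by the same gen_dhchap_key helper: pull len/2 random bytes from /dev/urandom as a hex string, wrap it in the DHHC-1 secret format, and park the result in a 0600 temp file. The sketch below is reconstructed from the xtrace output, not copied from nvmf/common.sh verbatim; in particular, format_dhchap_key is driven by a short inline python snippet whose body is never expanded in this trace, so it is shown only as an external step:

    gen_dhchap_key() {    # usage: gen_dhchap_key <digest> <len>, e.g. gen_dhchap_key null 48
        local digest=$1 len=$2 key file
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # format_dhchap_key emits DHHC-1:<digest#>:<base64 payload>: via the untraced python helper
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    keys[0]=$(gen_dhchap_key null 48);   ckeys[0]=$(gen_dhchap_key sha512 64)
    keys[1]=$(gen_dhchap_key sha256 32); ckeys[1]=$(gen_dhchap_key sha384 48)
    keys[2]=$(gen_dhchap_key sha384 48); ckeys[2]=$(gen_dhchap_key sha256 32)
    keys[3]=$(gen_dhchap_key sha512 64); ckeys[3]=            # key3 deliberately has no controller key

The resulting files are what the next few commands register on both sides with keyring_file_add_key (against the default target RPC socket and against /var/tmp/host.sock), and the same secrets later appear inline as the --dhchap-secret/--dhchap-ctrl-secret values on the nvme connect calls.
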
00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 982329 /var/tmp/host.sock 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 982329 ']' 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:46.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.542 20:12:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2gM 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.2gM 00:17:46.802 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.2gM 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8ZE ]] 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ZE 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ZE 00:17:47.064 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8ZE 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yMh 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.yMh 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.yMh 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.3aj ]] 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3aj 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3aj 00:17:47.326 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3aj 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FY1 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FY1 00:17:47.587 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FY1 00:17:47.587 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rnF ]] 00:17:47.587 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rnF 00:17:47.587 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.587 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.847 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.847 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rnF 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.rnF 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RxJ 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.RxJ 00:17:47.848 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.RxJ 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.108 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.368 00:17:48.368 20:12:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.368 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.368 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.628 { 00:17:48.628 "cntlid": 1, 00:17:48.628 "qid": 0, 00:17:48.628 "state": "enabled", 00:17:48.628 "thread": "nvmf_tgt_poll_group_000", 00:17:48.628 "listen_address": { 00:17:48.628 "trtype": "TCP", 00:17:48.628 "adrfam": "IPv4", 00:17:48.628 "traddr": "10.0.0.2", 00:17:48.628 "trsvcid": "4420" 00:17:48.628 }, 00:17:48.628 "peer_address": { 00:17:48.628 "trtype": "TCP", 00:17:48.628 "adrfam": "IPv4", 00:17:48.628 "traddr": "10.0.0.1", 00:17:48.628 "trsvcid": "47746" 00:17:48.628 }, 00:17:48.628 "auth": { 00:17:48.628 "state": "completed", 00:17:48.628 "digest": "sha256", 00:17:48.628 "dhgroup": "null" 00:17:48.628 } 00:17:48.628 } 00:17:48.628 ]' 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.628 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.628 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.628 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.628 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.628 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.628 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.887 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.825 20:12:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.825 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.825 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.826 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.826 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.826 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.826 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.826 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.085 00:17:50.085 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.085 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.085 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.344 { 00:17:50.344 "cntlid": 3, 00:17:50.344 "qid": 0, 00:17:50.344 
"state": "enabled", 00:17:50.344 "thread": "nvmf_tgt_poll_group_000", 00:17:50.344 "listen_address": { 00:17:50.344 "trtype": "TCP", 00:17:50.344 "adrfam": "IPv4", 00:17:50.344 "traddr": "10.0.0.2", 00:17:50.344 "trsvcid": "4420" 00:17:50.344 }, 00:17:50.344 "peer_address": { 00:17:50.344 "trtype": "TCP", 00:17:50.344 "adrfam": "IPv4", 00:17:50.344 "traddr": "10.0.0.1", 00:17:50.344 "trsvcid": "47768" 00:17:50.344 }, 00:17:50.344 "auth": { 00:17:50.344 "state": "completed", 00:17:50.344 "digest": "sha256", 00:17:50.344 "dhgroup": "null" 00:17:50.344 } 00:17:50.344 } 00:17:50.344 ]' 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.344 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.604 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:17:51.173 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.431 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.431 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.431 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.432 20:12:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.432 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.693 00:17:51.693 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.693 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.693 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.954 { 00:17:51.954 "cntlid": 5, 00:17:51.954 "qid": 0, 00:17:51.954 "state": "enabled", 00:17:51.954 "thread": "nvmf_tgt_poll_group_000", 00:17:51.954 "listen_address": { 00:17:51.954 "trtype": "TCP", 00:17:51.954 "adrfam": "IPv4", 00:17:51.954 "traddr": "10.0.0.2", 00:17:51.954 "trsvcid": "4420" 00:17:51.954 }, 00:17:51.954 "peer_address": { 00:17:51.954 "trtype": "TCP", 00:17:51.954 "adrfam": "IPv4", 00:17:51.954 "traddr": "10.0.0.1", 00:17:51.954 "trsvcid": "51864" 00:17:51.954 }, 00:17:51.954 "auth": { 00:17:51.954 "state": "completed", 00:17:51.954 "digest": "sha256", 00:17:51.954 "dhgroup": "null" 00:17:51.954 } 00:17:51.954 } 00:17:51.954 ]' 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.954 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.214 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.185 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.447 00:17:53.447 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.447 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.447 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.708 { 00:17:53.708 "cntlid": 7, 00:17:53.708 "qid": 0, 00:17:53.708 "state": "enabled", 00:17:53.708 "thread": "nvmf_tgt_poll_group_000", 00:17:53.708 "listen_address": { 00:17:53.708 "trtype": "TCP", 00:17:53.708 "adrfam": "IPv4", 00:17:53.708 "traddr": "10.0.0.2", 00:17:53.708 "trsvcid": "4420" 00:17:53.708 }, 00:17:53.708 "peer_address": { 00:17:53.708 "trtype": "TCP", 00:17:53.708 "adrfam": "IPv4", 00:17:53.708 "traddr": "10.0.0.1", 00:17:53.708 "trsvcid": "51898" 00:17:53.708 }, 00:17:53.708 "auth": { 00:17:53.708 "state": "completed", 00:17:53.708 "digest": "sha256", 00:17:53.708 "dhgroup": "null" 00:17:53.708 } 00:17:53.708 } 00:17:53.708 ]' 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:53.708 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.708 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.708 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.708 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.969 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.541 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.801 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.061 00:17:55.061 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.061 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.061 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.322 { 00:17:55.322 "cntlid": 9, 00:17:55.322 "qid": 0, 00:17:55.322 "state": "enabled", 00:17:55.322 "thread": "nvmf_tgt_poll_group_000", 00:17:55.322 "listen_address": { 00:17:55.322 "trtype": "TCP", 00:17:55.322 "adrfam": "IPv4", 00:17:55.322 "traddr": "10.0.0.2", 00:17:55.322 "trsvcid": "4420" 00:17:55.322 }, 00:17:55.322 "peer_address": { 00:17:55.322 "trtype": "TCP", 00:17:55.322 "adrfam": "IPv4", 00:17:55.322 "traddr": "10.0.0.1", 00:17:55.322 "trsvcid": "51930" 00:17:55.322 }, 00:17:55.322 "auth": { 00:17:55.322 "state": "completed", 00:17:55.322 "digest": "sha256", 00:17:55.322 "dhgroup": "ffdhe2048" 00:17:55.322 } 00:17:55.322 } 00:17:55.322 ]' 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.322 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.583 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:17:56.170 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.170 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.170 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.170 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.431 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.691 00:17:56.691 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.691 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.691 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.952 { 00:17:56.952 "cntlid": 11, 00:17:56.952 "qid": 0, 00:17:56.952 "state": "enabled", 00:17:56.952 "thread": "nvmf_tgt_poll_group_000", 00:17:56.952 "listen_address": { 00:17:56.952 "trtype": "TCP", 00:17:56.952 "adrfam": "IPv4", 00:17:56.952 "traddr": "10.0.0.2", 00:17:56.952 "trsvcid": "4420" 00:17:56.952 }, 00:17:56.952 "peer_address": { 00:17:56.952 "trtype": "TCP", 00:17:56.952 "adrfam": "IPv4", 00:17:56.952 "traddr": "10.0.0.1", 00:17:56.952 "trsvcid": "51942" 00:17:56.952 }, 00:17:56.952 "auth": { 00:17:56.952 "state": "completed", 00:17:56.952 "digest": "sha256", 00:17:56.952 "dhgroup": "ffdhe2048" 00:17:56.952 } 00:17:56.952 } 00:17:56.952 ]' 00:17:56.952 
20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.952 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.211 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.192 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.193 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.451 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.451 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.451 { 00:17:58.451 "cntlid": 13, 00:17:58.451 "qid": 0, 00:17:58.451 "state": "enabled", 00:17:58.451 "thread": "nvmf_tgt_poll_group_000", 00:17:58.451 "listen_address": { 00:17:58.451 "trtype": "TCP", 00:17:58.451 "adrfam": "IPv4", 00:17:58.452 "traddr": "10.0.0.2", 00:17:58.452 "trsvcid": "4420" 00:17:58.452 }, 00:17:58.452 "peer_address": { 00:17:58.452 "trtype": "TCP", 00:17:58.452 "adrfam": "IPv4", 00:17:58.452 "traddr": "10.0.0.1", 00:17:58.452 "trsvcid": "51962" 00:17:58.452 }, 00:17:58.452 "auth": { 00:17:58.452 "state": "completed", 00:17:58.452 "digest": "sha256", 00:17:58.452 "dhgroup": "ffdhe2048" 00:17:58.452 } 00:17:58.452 } 00:17:58.452 ]' 00:17:58.452 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.711 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.971 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.541 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.801 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.061 00:18:00.061 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.061 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.062 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.322 { 00:18:00.322 "cntlid": 15, 00:18:00.322 "qid": 0, 00:18:00.322 "state": "enabled", 00:18:00.322 "thread": "nvmf_tgt_poll_group_000", 00:18:00.322 "listen_address": { 00:18:00.322 "trtype": "TCP", 00:18:00.322 "adrfam": "IPv4", 00:18:00.322 "traddr": "10.0.0.2", 00:18:00.322 "trsvcid": "4420" 00:18:00.322 }, 00:18:00.322 "peer_address": { 00:18:00.322 "trtype": "TCP", 00:18:00.322 "adrfam": "IPv4", 00:18:00.322 "traddr": "10.0.0.1", 00:18:00.322 "trsvcid": "51986" 00:18:00.322 }, 00:18:00.322 "auth": { 00:18:00.322 "state": "completed", 00:18:00.322 "digest": "sha256", 00:18:00.322 "dhgroup": "ffdhe2048" 00:18:00.322 } 00:18:00.322 } 00:18:00.322 ]' 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.322 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.583 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.154 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.414 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.674 00:18:01.674 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.674 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.674 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.934 { 00:18:01.934 "cntlid": 17, 00:18:01.934 "qid": 0, 00:18:01.934 "state": "enabled", 00:18:01.934 "thread": "nvmf_tgt_poll_group_000", 00:18:01.934 "listen_address": { 00:18:01.934 "trtype": "TCP", 00:18:01.934 "adrfam": "IPv4", 00:18:01.934 "traddr": 
"10.0.0.2", 00:18:01.934 "trsvcid": "4420" 00:18:01.934 }, 00:18:01.934 "peer_address": { 00:18:01.934 "trtype": "TCP", 00:18:01.934 "adrfam": "IPv4", 00:18:01.934 "traddr": "10.0.0.1", 00:18:01.934 "trsvcid": "37176" 00:18:01.934 }, 00:18:01.934 "auth": { 00:18:01.934 "state": "completed", 00:18:01.934 "digest": "sha256", 00:18:01.934 "dhgroup": "ffdhe3072" 00:18:01.934 } 00:18:01.934 } 00:18:01.934 ]' 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.934 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.195 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.135 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.136 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.397 00:18:03.397 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.397 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.397 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.658 { 00:18:03.658 "cntlid": 19, 00:18:03.658 "qid": 0, 00:18:03.658 "state": "enabled", 00:18:03.658 "thread": "nvmf_tgt_poll_group_000", 00:18:03.658 "listen_address": { 00:18:03.658 "trtype": "TCP", 00:18:03.658 "adrfam": "IPv4", 00:18:03.658 "traddr": "10.0.0.2", 00:18:03.658 "trsvcid": "4420" 00:18:03.658 }, 00:18:03.658 "peer_address": { 00:18:03.658 "trtype": "TCP", 00:18:03.658 "adrfam": "IPv4", 00:18:03.658 "traddr": "10.0.0.1", 00:18:03.658 "trsvcid": "37208" 00:18:03.658 }, 00:18:03.658 "auth": { 00:18:03.658 "state": "completed", 00:18:03.658 "digest": "sha256", 00:18:03.658 "dhgroup": "ffdhe3072" 00:18:03.658 } 00:18:03.658 } 00:18:03.658 ]' 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.658 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.658 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.658 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.658 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.919 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.496 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.757 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.758 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.758 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.019 00:18:05.019 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.019 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.019 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.281 { 00:18:05.281 "cntlid": 21, 00:18:05.281 "qid": 0, 00:18:05.281 "state": "enabled", 00:18:05.281 "thread": "nvmf_tgt_poll_group_000", 00:18:05.281 "listen_address": { 00:18:05.281 "trtype": "TCP", 00:18:05.281 "adrfam": "IPv4", 00:18:05.281 "traddr": "10.0.0.2", 00:18:05.281 "trsvcid": "4420" 00:18:05.281 }, 00:18:05.281 "peer_address": { 00:18:05.281 "trtype": "TCP", 00:18:05.281 "adrfam": "IPv4", 00:18:05.281 "traddr": "10.0.0.1", 00:18:05.281 "trsvcid": "37242" 00:18:05.281 }, 00:18:05.281 "auth": { 00:18:05.281 "state": "completed", 00:18:05.281 "digest": "sha256", 00:18:05.281 "dhgroup": "ffdhe3072" 00:18:05.281 } 00:18:05.281 } 00:18:05.281 ]' 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.281 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.542 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:06.112 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:06.112 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.112 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.112 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.372 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.373 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.373 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.633 00:18:06.633 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.633 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.633 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.893 { 00:18:06.893 "cntlid": 23, 00:18:06.893 "qid": 0, 00:18:06.893 "state": "enabled", 00:18:06.893 "thread": "nvmf_tgt_poll_group_000", 00:18:06.893 "listen_address": { 00:18:06.893 "trtype": "TCP", 00:18:06.893 "adrfam": "IPv4", 00:18:06.893 "traddr": "10.0.0.2", 00:18:06.893 "trsvcid": "4420" 00:18:06.893 }, 00:18:06.893 "peer_address": { 00:18:06.893 "trtype": "TCP", 00:18:06.893 "adrfam": "IPv4", 00:18:06.893 "traddr": "10.0.0.1", 00:18:06.893 "trsvcid": "37264" 00:18:06.893 }, 00:18:06.893 "auth": { 00:18:06.893 "state": "completed", 00:18:06.893 "digest": "sha256", 00:18:06.893 "dhgroup": "ffdhe3072" 00:18:06.893 } 00:18:06.893 } 00:18:06.893 ]' 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.893 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.154 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.093 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.354 00:18:08.354 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.354 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.354 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.614 { 00:18:08.614 "cntlid": 25, 00:18:08.614 "qid": 0, 00:18:08.614 "state": "enabled", 00:18:08.614 "thread": "nvmf_tgt_poll_group_000", 00:18:08.614 "listen_address": { 00:18:08.614 "trtype": "TCP", 00:18:08.614 "adrfam": "IPv4", 00:18:08.614 "traddr": "10.0.0.2", 00:18:08.614 "trsvcid": "4420" 00:18:08.614 }, 00:18:08.614 "peer_address": { 00:18:08.614 "trtype": "TCP", 00:18:08.614 "adrfam": "IPv4", 00:18:08.614 "traddr": "10.0.0.1", 00:18:08.614 "trsvcid": "37284" 00:18:08.614 }, 00:18:08.614 "auth": { 00:18:08.614 "state": "completed", 00:18:08.614 "digest": "sha256", 00:18:08.614 "dhgroup": "ffdhe4096" 00:18:08.614 } 00:18:08.614 } 00:18:08.614 ]' 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.614 20:13:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.614 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.875 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:09.446 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.446 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.446 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.446 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.446 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.447 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.447 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.447 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.708 20:13:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.708 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.968 00:18:09.969 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.969 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.969 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.229 { 00:18:10.229 "cntlid": 27, 00:18:10.229 "qid": 0, 00:18:10.229 "state": "enabled", 00:18:10.229 "thread": "nvmf_tgt_poll_group_000", 00:18:10.229 "listen_address": { 00:18:10.229 "trtype": "TCP", 00:18:10.229 "adrfam": "IPv4", 00:18:10.229 "traddr": "10.0.0.2", 00:18:10.229 "trsvcid": "4420" 00:18:10.229 }, 00:18:10.229 "peer_address": { 00:18:10.229 "trtype": "TCP", 00:18:10.229 "adrfam": "IPv4", 00:18:10.229 "traddr": "10.0.0.1", 00:18:10.229 "trsvcid": "37308" 00:18:10.229 }, 00:18:10.229 "auth": { 00:18:10.229 "state": "completed", 00:18:10.229 "digest": "sha256", 00:18:10.229 "dhgroup": "ffdhe4096" 00:18:10.229 } 00:18:10.229 } 00:18:10.229 ]' 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.229 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.490 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:11.062 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.323 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.584 00:18:11.584 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.584 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.584 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.845 { 00:18:11.845 "cntlid": 29, 00:18:11.845 "qid": 0, 00:18:11.845 "state": "enabled", 00:18:11.845 "thread": "nvmf_tgt_poll_group_000", 00:18:11.845 "listen_address": { 00:18:11.845 "trtype": "TCP", 00:18:11.845 "adrfam": "IPv4", 00:18:11.845 "traddr": "10.0.0.2", 00:18:11.845 "trsvcid": "4420" 00:18:11.845 }, 00:18:11.845 "peer_address": { 00:18:11.845 "trtype": "TCP", 00:18:11.845 "adrfam": "IPv4", 00:18:11.845 "traddr": "10.0.0.1", 00:18:11.845 "trsvcid": "55726" 00:18:11.845 }, 00:18:11.845 "auth": { 00:18:11.845 "state": "completed", 00:18:11.845 "digest": "sha256", 00:18:11.845 "dhgroup": "ffdhe4096" 00:18:11.845 } 00:18:11.845 } 00:18:11.845 ]' 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.845 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.105 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:13.101 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.101 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
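For reference, one round of the connect_authenticate flow traced above reduces to the following sequence of SPDK RPCs and nvme-cli calls. This is a minimal sketch reconstructed from the xtrace output, not the literal target/auth.sh code: the NQNs, addresses and key names are copied from the log, rpc.py is assumed to be on PATH (target-side calls going to the default RPC socket, host-side calls to /var/tmp/host.sock), and the DHHC-1 values are elided placeholders.

    # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup pair under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # target side: allow the host NQN to authenticate with a specific key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach a controller, then verify the qpair completed authentication
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'   # expect: sha256 ffdhe4096 completed

    # detach, repeat the same check through the kernel initiator, then clean up
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

Rounds that use key3 follow the same shape but omit the controller key, which matches the --dhchap-key key3 invocations without --dhchap-ctrlr-key seen in the trace.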
00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.102 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.362 00:18:13.362 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.362 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.362 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.623 { 00:18:13.623 "cntlid": 31, 00:18:13.623 "qid": 0, 00:18:13.623 "state": "enabled", 00:18:13.623 "thread": "nvmf_tgt_poll_group_000", 00:18:13.623 "listen_address": { 00:18:13.623 "trtype": "TCP", 00:18:13.623 "adrfam": "IPv4", 00:18:13.623 "traddr": "10.0.0.2", 00:18:13.623 "trsvcid": 
"4420" 00:18:13.623 }, 00:18:13.623 "peer_address": { 00:18:13.623 "trtype": "TCP", 00:18:13.623 "adrfam": "IPv4", 00:18:13.623 "traddr": "10.0.0.1", 00:18:13.623 "trsvcid": "55752" 00:18:13.623 }, 00:18:13.623 "auth": { 00:18:13.623 "state": "completed", 00:18:13.623 "digest": "sha256", 00:18:13.623 "dhgroup": "ffdhe4096" 00:18:13.623 } 00:18:13.623 } 00:18:13.623 ]' 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.623 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.884 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.456 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.717 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.978 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.239 { 00:18:15.239 "cntlid": 33, 00:18:15.239 "qid": 0, 00:18:15.239 "state": "enabled", 00:18:15.239 "thread": "nvmf_tgt_poll_group_000", 00:18:15.239 "listen_address": { 00:18:15.239 "trtype": "TCP", 00:18:15.239 "adrfam": "IPv4", 00:18:15.239 "traddr": "10.0.0.2", 00:18:15.239 "trsvcid": "4420" 00:18:15.239 }, 00:18:15.239 "peer_address": { 00:18:15.239 "trtype": "TCP", 00:18:15.239 "adrfam": "IPv4", 00:18:15.239 "traddr": "10.0.0.1", 00:18:15.239 "trsvcid": "55778" 00:18:15.239 }, 00:18:15.239 "auth": { 00:18:15.239 "state": "completed", 00:18:15.239 "digest": "sha256", 00:18:15.239 "dhgroup": "ffdhe6144" 00:18:15.239 } 00:18:15.239 } 00:18:15.239 ]' 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.239 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.501 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.501 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.501 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:15.501 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.501 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.501 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:16.442 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.443 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.014 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.014 { 00:18:17.014 "cntlid": 35, 00:18:17.014 "qid": 0, 00:18:17.014 "state": "enabled", 00:18:17.014 "thread": "nvmf_tgt_poll_group_000", 00:18:17.014 "listen_address": { 00:18:17.014 "trtype": "TCP", 00:18:17.014 "adrfam": "IPv4", 00:18:17.014 "traddr": "10.0.0.2", 00:18:17.014 "trsvcid": "4420" 00:18:17.014 }, 00:18:17.014 "peer_address": { 00:18:17.014 "trtype": "TCP", 00:18:17.014 "adrfam": "IPv4", 00:18:17.014 "traddr": "10.0.0.1", 00:18:17.014 "trsvcid": "55810" 00:18:17.014 }, 00:18:17.014 "auth": { 00:18:17.014 "state": "completed", 00:18:17.014 "digest": "sha256", 00:18:17.014 "dhgroup": "ffdhe6144" 00:18:17.014 } 00:18:17.014 } 00:18:17.014 ]' 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.014 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.275 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.275 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.275 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.275 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
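The repeated target/auth.sh@92-@96 markers come from a nested loop that re-runs the same check for every (dhgroup, keyid) combination of the current digest (sha256 throughout this part of the log). Roughly, and only as a sketch inferred from the xtrace output (the dhgroup list shown is just the groups visible here, and hostrpc/connect_authenticate stand in for the helpers traced at auth.sh@31 and @34-@49):

    hostrpc() {   # wrapper seen at target/auth.sh@31: RPC against the host-side SPDK app
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@92 (ffdhe3072..ffdhe8192 in this run)
        for keyid in "${!keys[@]}"; do         # target/auth.sh@93 (keys 0..3 here)
            # limit the host to the single digest/dhgroup pair under test   target/auth.sh@94
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"                 # target/auth.sh@96
        done
    done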
00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.216 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.788 00:18:18.788 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.788 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.788 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.788 { 00:18:18.788 "cntlid": 37, 00:18:18.788 "qid": 0, 00:18:18.788 "state": "enabled", 00:18:18.788 "thread": "nvmf_tgt_poll_group_000", 00:18:18.788 "listen_address": { 00:18:18.788 "trtype": "TCP", 00:18:18.788 "adrfam": "IPv4", 00:18:18.788 "traddr": "10.0.0.2", 00:18:18.788 "trsvcid": "4420" 00:18:18.788 }, 00:18:18.788 "peer_address": { 00:18:18.788 "trtype": "TCP", 00:18:18.788 "adrfam": "IPv4", 00:18:18.788 "traddr": "10.0.0.1", 00:18:18.788 "trsvcid": "55832" 00:18:18.788 }, 00:18:18.788 "auth": { 00:18:18.788 "state": "completed", 00:18:18.788 "digest": "sha256", 00:18:18.788 "dhgroup": "ffdhe6144" 00:18:18.788 } 00:18:18.788 } 00:18:18.788 ]' 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.788 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.048 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.048 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.048 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.048 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.048 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.048 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.990 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.563 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.563 { 00:18:20.563 "cntlid": 39, 00:18:20.563 "qid": 0, 00:18:20.563 "state": "enabled", 00:18:20.563 "thread": "nvmf_tgt_poll_group_000", 00:18:20.563 "listen_address": { 00:18:20.563 "trtype": "TCP", 00:18:20.563 "adrfam": "IPv4", 00:18:20.563 "traddr": "10.0.0.2", 00:18:20.563 "trsvcid": "4420" 00:18:20.563 }, 00:18:20.563 "peer_address": { 00:18:20.563 "trtype": "TCP", 00:18:20.563 "adrfam": "IPv4", 00:18:20.563 "traddr": "10.0.0.1", 00:18:20.563 "trsvcid": "32866" 00:18:20.563 }, 00:18:20.563 "auth": { 00:18:20.563 "state": "completed", 00:18:20.563 "digest": "sha256", 00:18:20.563 "dhgroup": "ffdhe6144" 00:18:20.563 } 00:18:20.563 } 00:18:20.563 ]' 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.563 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.824 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.824 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.824 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.824 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.824 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.824 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.773 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.773 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.345 00:18:22.345 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.345 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.345 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.606 { 00:18:22.606 "cntlid": 41, 00:18:22.606 "qid": 0, 00:18:22.606 "state": "enabled", 00:18:22.606 "thread": "nvmf_tgt_poll_group_000", 00:18:22.606 "listen_address": { 00:18:22.606 "trtype": "TCP", 00:18:22.606 "adrfam": "IPv4", 00:18:22.606 "traddr": "10.0.0.2", 00:18:22.606 "trsvcid": "4420" 00:18:22.606 }, 00:18:22.606 "peer_address": { 00:18:22.606 "trtype": "TCP", 00:18:22.606 "adrfam": "IPv4", 00:18:22.606 "traddr": "10.0.0.1", 00:18:22.606 "trsvcid": "32878" 00:18:22.606 }, 00:18:22.606 "auth": { 00:18:22.606 "state": "completed", 00:18:22.606 "digest": "sha256", 00:18:22.606 "dhgroup": "ffdhe8192" 00:18:22.606 } 00:18:22.606 } 00:18:22.606 ]' 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.606 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.866 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.806 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.806 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.376 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.376 { 00:18:24.376 "cntlid": 43, 00:18:24.376 "qid": 0, 00:18:24.376 "state": "enabled", 00:18:24.376 "thread": "nvmf_tgt_poll_group_000", 00:18:24.376 "listen_address": { 00:18:24.376 "trtype": "TCP", 00:18:24.376 "adrfam": "IPv4", 00:18:24.376 "traddr": "10.0.0.2", 00:18:24.376 "trsvcid": "4420" 00:18:24.376 }, 00:18:24.376 "peer_address": { 00:18:24.376 "trtype": "TCP", 00:18:24.376 "adrfam": "IPv4", 00:18:24.376 "traddr": "10.0.0.1", 00:18:24.376 "trsvcid": "32912" 00:18:24.376 }, 00:18:24.376 "auth": { 00:18:24.376 "state": "completed", 00:18:24.376 "digest": "sha256", 00:18:24.376 "dhgroup": "ffdhe8192" 00:18:24.376 } 00:18:24.376 } 00:18:24.376 ]' 00:18:24.376 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.636 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.897 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.464 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.724 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.294 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.294 { 00:18:26.294 "cntlid": 45, 00:18:26.294 "qid": 0, 00:18:26.294 "state": "enabled", 00:18:26.294 "thread": "nvmf_tgt_poll_group_000", 00:18:26.294 "listen_address": { 00:18:26.294 "trtype": "TCP", 00:18:26.294 "adrfam": "IPv4", 00:18:26.294 "traddr": "10.0.0.2", 00:18:26.294 
"trsvcid": "4420" 00:18:26.294 }, 00:18:26.294 "peer_address": { 00:18:26.294 "trtype": "TCP", 00:18:26.294 "adrfam": "IPv4", 00:18:26.294 "traddr": "10.0.0.1", 00:18:26.294 "trsvcid": "32928" 00:18:26.294 }, 00:18:26.294 "auth": { 00:18:26.294 "state": "completed", 00:18:26.294 "digest": "sha256", 00:18:26.294 "dhgroup": "ffdhe8192" 00:18:26.294 } 00:18:26.294 } 00:18:26.294 ]' 00:18:26.294 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.553 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.812 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:27.381 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:27.382 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.641 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.243 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.243 { 00:18:28.243 "cntlid": 47, 00:18:28.243 "qid": 0, 00:18:28.243 "state": "enabled", 00:18:28.243 "thread": "nvmf_tgt_poll_group_000", 00:18:28.243 "listen_address": { 00:18:28.243 "trtype": "TCP", 00:18:28.243 "adrfam": "IPv4", 00:18:28.243 "traddr": "10.0.0.2", 00:18:28.243 "trsvcid": "4420" 00:18:28.243 }, 00:18:28.243 "peer_address": { 00:18:28.243 "trtype": "TCP", 00:18:28.243 "adrfam": "IPv4", 00:18:28.243 "traddr": "10.0.0.1", 00:18:28.243 "trsvcid": "32958" 00:18:28.243 }, 00:18:28.243 "auth": { 00:18:28.243 "state": "completed", 00:18:28.243 "digest": "sha256", 00:18:28.243 "dhgroup": "ffdhe8192" 00:18:28.243 } 00:18:28.243 } 00:18:28.243 ]' 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.243 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.504 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.504 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.504 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.504 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:18:28.504 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.504 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.447 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.707 00:18:29.707 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.707 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.707 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.968 { 00:18:29.968 "cntlid": 49, 00:18:29.968 "qid": 0, 00:18:29.968 "state": "enabled", 00:18:29.968 "thread": "nvmf_tgt_poll_group_000", 00:18:29.968 "listen_address": { 00:18:29.968 "trtype": "TCP", 00:18:29.968 "adrfam": "IPv4", 00:18:29.968 "traddr": "10.0.0.2", 00:18:29.968 "trsvcid": "4420" 00:18:29.968 }, 00:18:29.968 "peer_address": { 00:18:29.968 "trtype": "TCP", 00:18:29.968 "adrfam": "IPv4", 00:18:29.968 "traddr": "10.0.0.1", 00:18:29.968 "trsvcid": "32990" 00:18:29.968 }, 00:18:29.968 "auth": { 00:18:29.968 "state": "completed", 00:18:29.968 "digest": "sha384", 00:18:29.968 "dhgroup": "null" 00:18:29.968 } 00:18:29.968 } 00:18:29.968 ]' 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.968 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.228 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.166 20:13:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.166 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.167 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.167 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.167 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.428 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.428 { 00:18:31.428 "cntlid": 51, 00:18:31.428 "qid": 0, 00:18:31.428 "state": "enabled", 00:18:31.428 "thread": "nvmf_tgt_poll_group_000", 00:18:31.428 "listen_address": { 00:18:31.428 "trtype": "TCP", 00:18:31.428 "adrfam": "IPv4", 00:18:31.428 "traddr": "10.0.0.2", 00:18:31.428 "trsvcid": "4420" 00:18:31.428 }, 00:18:31.428 "peer_address": { 00:18:31.428 "trtype": "TCP", 00:18:31.428 "adrfam": "IPv4", 00:18:31.428 "traddr": "10.0.0.1", 00:18:31.428 "trsvcid": "46662" 00:18:31.428 }, 00:18:31.428 "auth": { 00:18:31.428 "state": "completed", 00:18:31.428 "digest": "sha384", 00:18:31.428 "dhgroup": "null" 00:18:31.428 } 00:18:31.428 } 00:18:31.428 ]' 00:18:31.428 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.688 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.948 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:32.517 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:32.777 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:32.777 
20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.777 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.777 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.777 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.777 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.777 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.778 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.778 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.778 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.778 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.778 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.037 00:18:33.037 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.037 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.037 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.307 { 00:18:33.307 "cntlid": 53, 00:18:33.307 "qid": 0, 00:18:33.307 "state": "enabled", 00:18:33.307 "thread": "nvmf_tgt_poll_group_000", 00:18:33.307 "listen_address": { 00:18:33.307 "trtype": "TCP", 00:18:33.307 "adrfam": "IPv4", 00:18:33.307 "traddr": "10.0.0.2", 00:18:33.307 "trsvcid": "4420" 00:18:33.307 }, 00:18:33.307 "peer_address": { 00:18:33.307 "trtype": "TCP", 00:18:33.307 "adrfam": "IPv4", 00:18:33.307 "traddr": "10.0.0.1", 00:18:33.307 "trsvcid": "46700" 00:18:33.307 }, 00:18:33.307 "auth": { 00:18:33.307 "state": "completed", 00:18:33.307 "digest": "sha384", 00:18:33.307 "dhgroup": "null" 00:18:33.307 } 00:18:33.307 } 00:18:33.307 ]' 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.307 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.567 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.138 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.399 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.660 00:18:34.660 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.660 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.660 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.660 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.660 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.660 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.660 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.922 { 00:18:34.922 "cntlid": 55, 00:18:34.922 "qid": 0, 00:18:34.922 "state": "enabled", 00:18:34.922 "thread": "nvmf_tgt_poll_group_000", 00:18:34.922 "listen_address": { 00:18:34.922 "trtype": "TCP", 00:18:34.922 "adrfam": "IPv4", 00:18:34.922 "traddr": "10.0.0.2", 00:18:34.922 "trsvcid": "4420" 00:18:34.922 }, 00:18:34.922 "peer_address": { 00:18:34.922 "trtype": "TCP", 00:18:34.922 "adrfam": "IPv4", 00:18:34.922 "traddr": "10.0.0.1", 00:18:34.922 "trsvcid": "46726" 00:18:34.922 }, 00:18:34.922 "auth": { 00:18:34.922 "state": "completed", 00:18:34.922 "digest": "sha384", 00:18:34.922 "dhgroup": "null" 00:18:34.922 } 00:18:34.922 } 00:18:34.922 ]' 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.922 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.184 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:35.753 20:13:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:35.753 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.016 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.017 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.017 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.017 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.017 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.277 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.277 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.537 { 00:18:36.537 "cntlid": 57, 00:18:36.537 "qid": 0, 00:18:36.537 "state": "enabled", 00:18:36.537 "thread": "nvmf_tgt_poll_group_000", 00:18:36.537 "listen_address": { 00:18:36.537 "trtype": "TCP", 00:18:36.537 "adrfam": "IPv4", 00:18:36.537 "traddr": "10.0.0.2", 00:18:36.537 "trsvcid": "4420" 00:18:36.537 }, 00:18:36.537 "peer_address": { 00:18:36.537 "trtype": "TCP", 00:18:36.537 "adrfam": "IPv4", 00:18:36.537 "traddr": "10.0.0.1", 00:18:36.537 "trsvcid": "46762" 00:18:36.537 }, 00:18:36.537 "auth": { 00:18:36.537 "state": "completed", 00:18:36.537 "digest": "sha384", 00:18:36.537 "dhgroup": "ffdhe2048" 00:18:36.537 } 00:18:36.537 } 00:18:36.537 ]' 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.537 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.798 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.370 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.631 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.891 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.891 { 00:18:37.891 "cntlid": 59, 00:18:37.891 "qid": 0, 00:18:37.891 "state": "enabled", 00:18:37.891 "thread": "nvmf_tgt_poll_group_000", 00:18:37.891 "listen_address": { 00:18:37.891 "trtype": "TCP", 00:18:37.891 "adrfam": "IPv4", 00:18:37.891 "traddr": "10.0.0.2", 00:18:37.891 "trsvcid": "4420" 00:18:37.891 }, 00:18:37.891 "peer_address": { 00:18:37.891 "trtype": "TCP", 00:18:37.891 "adrfam": "IPv4", 00:18:37.891 
"traddr": "10.0.0.1", 00:18:37.891 "trsvcid": "46796" 00:18:37.891 }, 00:18:37.891 "auth": { 00:18:37.891 "state": "completed", 00:18:37.891 "digest": "sha384", 00:18:37.891 "dhgroup": "ffdhe2048" 00:18:37.891 } 00:18:37.891 } 00:18:37.891 ]' 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.891 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.152 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.094 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.355 00:18:39.355 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.355 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.355 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.615 { 00:18:39.615 "cntlid": 61, 00:18:39.615 "qid": 0, 00:18:39.615 "state": "enabled", 00:18:39.615 "thread": "nvmf_tgt_poll_group_000", 00:18:39.615 "listen_address": { 00:18:39.615 "trtype": "TCP", 00:18:39.615 "adrfam": "IPv4", 00:18:39.615 "traddr": "10.0.0.2", 00:18:39.615 "trsvcid": "4420" 00:18:39.615 }, 00:18:39.615 "peer_address": { 00:18:39.615 "trtype": "TCP", 00:18:39.615 "adrfam": "IPv4", 00:18:39.615 "traddr": "10.0.0.1", 00:18:39.615 "trsvcid": "46812" 00:18:39.615 }, 00:18:39.615 "auth": { 00:18:39.615 "state": "completed", 00:18:39.615 "digest": "sha384", 00:18:39.615 "dhgroup": "ffdhe2048" 00:18:39.615 } 00:18:39.615 } 00:18:39.615 ]' 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.615 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.615 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.615 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.615 20:13:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.876 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.819 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.819 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.080 00:18:41.080 20:13:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.080 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.080 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.080 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.080 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.080 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.080 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.341 { 00:18:41.341 "cntlid": 63, 00:18:41.341 "qid": 0, 00:18:41.341 "state": "enabled", 00:18:41.341 "thread": "nvmf_tgt_poll_group_000", 00:18:41.341 "listen_address": { 00:18:41.341 "trtype": "TCP", 00:18:41.341 "adrfam": "IPv4", 00:18:41.341 "traddr": "10.0.0.2", 00:18:41.341 "trsvcid": "4420" 00:18:41.341 }, 00:18:41.341 "peer_address": { 00:18:41.341 "trtype": "TCP", 00:18:41.341 "adrfam": "IPv4", 00:18:41.341 "traddr": "10.0.0.1", 00:18:41.341 "trsvcid": "39956" 00:18:41.341 }, 00:18:41.341 "auth": { 00:18:41.341 "state": "completed", 00:18:41.341 "digest": "sha384", 00:18:41.341 "dhgroup": "ffdhe2048" 00:18:41.341 } 00:18:41.341 } 00:18:41.341 ]' 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.341 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.602 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.171 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.430 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.689 00:18:42.689 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.689 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.689 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.976 { 
00:18:42.976 "cntlid": 65, 00:18:42.976 "qid": 0, 00:18:42.976 "state": "enabled", 00:18:42.976 "thread": "nvmf_tgt_poll_group_000", 00:18:42.976 "listen_address": { 00:18:42.976 "trtype": "TCP", 00:18:42.976 "adrfam": "IPv4", 00:18:42.976 "traddr": "10.0.0.2", 00:18:42.976 "trsvcid": "4420" 00:18:42.976 }, 00:18:42.976 "peer_address": { 00:18:42.976 "trtype": "TCP", 00:18:42.976 "adrfam": "IPv4", 00:18:42.976 "traddr": "10.0.0.1", 00:18:42.976 "trsvcid": "39996" 00:18:42.976 }, 00:18:42.976 "auth": { 00:18:42.976 "state": "completed", 00:18:42.976 "digest": "sha384", 00:18:42.976 "dhgroup": "ffdhe3072" 00:18:42.976 } 00:18:42.976 } 00:18:42.976 ]' 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.976 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.237 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.807 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.069 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.329 00:18:44.329 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.329 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.330 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.590 { 00:18:44.590 "cntlid": 67, 00:18:44.590 "qid": 0, 00:18:44.590 "state": "enabled", 00:18:44.590 "thread": "nvmf_tgt_poll_group_000", 00:18:44.590 "listen_address": { 00:18:44.590 "trtype": "TCP", 00:18:44.590 "adrfam": "IPv4", 00:18:44.590 "traddr": "10.0.0.2", 00:18:44.590 "trsvcid": "4420" 00:18:44.590 }, 00:18:44.590 "peer_address": { 00:18:44.590 "trtype": "TCP", 00:18:44.590 "adrfam": "IPv4", 00:18:44.590 "traddr": "10.0.0.1", 00:18:44.590 "trsvcid": "40040" 00:18:44.590 }, 00:18:44.590 "auth": { 00:18:44.590 "state": "completed", 00:18:44.590 "digest": "sha384", 00:18:44.590 "dhgroup": "ffdhe3072" 00:18:44.590 } 00:18:44.590 } 00:18:44.590 ]' 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.590 20:13:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.590 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.850 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.423 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.684 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.944 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.944 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.205 { 00:18:46.205 "cntlid": 69, 00:18:46.205 "qid": 0, 00:18:46.205 "state": "enabled", 00:18:46.205 "thread": "nvmf_tgt_poll_group_000", 00:18:46.205 "listen_address": { 00:18:46.205 "trtype": "TCP", 00:18:46.205 "adrfam": "IPv4", 00:18:46.205 "traddr": "10.0.0.2", 00:18:46.205 "trsvcid": "4420" 00:18:46.205 }, 00:18:46.205 "peer_address": { 00:18:46.205 "trtype": "TCP", 00:18:46.205 "adrfam": "IPv4", 00:18:46.205 "traddr": "10.0.0.1", 00:18:46.205 "trsvcid": "40074" 00:18:46.205 }, 00:18:46.205 "auth": { 00:18:46.205 "state": "completed", 00:18:46.205 "digest": "sha384", 00:18:46.205 "dhgroup": "ffdhe3072" 00:18:46.205 } 00:18:46.205 } 00:18:46.205 ]' 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.205 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.466 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret 
DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.036 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.297 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.558 00:18:47.558 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.558 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.558 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.819 { 00:18:47.819 "cntlid": 71, 00:18:47.819 "qid": 0, 00:18:47.819 "state": "enabled", 00:18:47.819 "thread": "nvmf_tgt_poll_group_000", 00:18:47.819 "listen_address": { 00:18:47.819 "trtype": "TCP", 00:18:47.819 "adrfam": "IPv4", 00:18:47.819 "traddr": "10.0.0.2", 00:18:47.819 "trsvcid": "4420" 00:18:47.819 }, 00:18:47.819 "peer_address": { 00:18:47.819 "trtype": "TCP", 00:18:47.819 "adrfam": "IPv4", 00:18:47.819 "traddr": "10.0.0.1", 00:18:47.819 "trsvcid": "40096" 00:18:47.819 }, 00:18:47.819 "auth": { 00:18:47.819 "state": "completed", 00:18:47.819 "digest": "sha384", 00:18:47.819 "dhgroup": "ffdhe3072" 00:18:47.819 } 00:18:47.819 } 00:18:47.819 ]' 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.819 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.080 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.023 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.284 00:18:49.284 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.284 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.284 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.547 { 00:18:49.547 "cntlid": 73, 00:18:49.547 "qid": 0, 00:18:49.547 "state": "enabled", 00:18:49.547 "thread": "nvmf_tgt_poll_group_000", 00:18:49.547 "listen_address": { 00:18:49.547 "trtype": "TCP", 00:18:49.547 "adrfam": "IPv4", 00:18:49.547 "traddr": "10.0.0.2", 00:18:49.547 "trsvcid": "4420" 00:18:49.547 }, 00:18:49.547 "peer_address": { 00:18:49.547 "trtype": "TCP", 00:18:49.547 "adrfam": "IPv4", 00:18:49.547 "traddr": "10.0.0.1", 00:18:49.547 "trsvcid": "40122" 00:18:49.547 }, 00:18:49.547 "auth": { 00:18:49.547 
"state": "completed", 00:18:49.547 "digest": "sha384", 00:18:49.547 "dhgroup": "ffdhe4096" 00:18:49.547 } 00:18:49.547 } 00:18:49.547 ]' 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.547 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.808 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:50.379 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.640 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.640 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.640 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.640 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.900 00:18:50.900 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.900 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.900 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.183 { 00:18:51.183 "cntlid": 75, 00:18:51.183 "qid": 0, 00:18:51.183 "state": "enabled", 00:18:51.183 "thread": "nvmf_tgt_poll_group_000", 00:18:51.183 "listen_address": { 00:18:51.183 "trtype": "TCP", 00:18:51.183 "adrfam": "IPv4", 00:18:51.183 "traddr": "10.0.0.2", 00:18:51.183 "trsvcid": "4420" 00:18:51.183 }, 00:18:51.183 "peer_address": { 00:18:51.183 "trtype": "TCP", 00:18:51.183 "adrfam": "IPv4", 00:18:51.183 "traddr": "10.0.0.1", 00:18:51.183 "trsvcid": "41870" 00:18:51.183 }, 00:18:51.183 "auth": { 00:18:51.183 "state": "completed", 00:18:51.183 "digest": "sha384", 00:18:51.183 "dhgroup": "ffdhe4096" 00:18:51.183 } 00:18:51.183 } 00:18:51.183 ]' 00:18:51.183 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.184 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.445 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.387 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.388 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:52.648 00:18:52.648 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.648 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.648 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.909 { 00:18:52.909 "cntlid": 77, 00:18:52.909 "qid": 0, 00:18:52.909 "state": "enabled", 00:18:52.909 "thread": "nvmf_tgt_poll_group_000", 00:18:52.909 "listen_address": { 00:18:52.909 "trtype": "TCP", 00:18:52.909 "adrfam": "IPv4", 00:18:52.909 "traddr": "10.0.0.2", 00:18:52.909 "trsvcid": "4420" 00:18:52.909 }, 00:18:52.909 "peer_address": { 00:18:52.909 "trtype": "TCP", 00:18:52.909 "adrfam": "IPv4", 00:18:52.909 "traddr": "10.0.0.1", 00:18:52.909 "trsvcid": "41896" 00:18:52.909 }, 00:18:52.909 "auth": { 00:18:52.909 "state": "completed", 00:18:52.909 "digest": "sha384", 00:18:52.909 "dhgroup": "ffdhe4096" 00:18:52.909 } 00:18:52.909 } 00:18:52.909 ]' 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.909 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.169 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.112 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.372 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.372 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.633 { 00:18:54.633 "cntlid": 79, 00:18:54.633 "qid": 
0, 00:18:54.633 "state": "enabled", 00:18:54.633 "thread": "nvmf_tgt_poll_group_000", 00:18:54.633 "listen_address": { 00:18:54.633 "trtype": "TCP", 00:18:54.633 "adrfam": "IPv4", 00:18:54.633 "traddr": "10.0.0.2", 00:18:54.633 "trsvcid": "4420" 00:18:54.633 }, 00:18:54.633 "peer_address": { 00:18:54.633 "trtype": "TCP", 00:18:54.633 "adrfam": "IPv4", 00:18:54.633 "traddr": "10.0.0.1", 00:18:54.633 "trsvcid": "41910" 00:18:54.633 }, 00:18:54.633 "auth": { 00:18:54.633 "state": "completed", 00:18:54.633 "digest": "sha384", 00:18:54.633 "dhgroup": "ffdhe4096" 00:18:54.633 } 00:18:54.633 } 00:18:54.633 ]' 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.633 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.892 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.461 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.720 20:13:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.720 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.721 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.721 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.721 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.721 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.980 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.240 { 00:18:56.240 "cntlid": 81, 00:18:56.240 "qid": 0, 00:18:56.240 "state": "enabled", 00:18:56.240 "thread": "nvmf_tgt_poll_group_000", 00:18:56.240 "listen_address": { 00:18:56.240 "trtype": "TCP", 00:18:56.240 "adrfam": "IPv4", 00:18:56.240 "traddr": "10.0.0.2", 00:18:56.240 "trsvcid": "4420" 00:18:56.240 }, 00:18:56.240 "peer_address": { 00:18:56.240 "trtype": "TCP", 00:18:56.240 "adrfam": "IPv4", 00:18:56.240 "traddr": "10.0.0.1", 00:18:56.240 "trsvcid": "41938" 00:18:56.240 }, 00:18:56.240 "auth": { 00:18:56.240 "state": "completed", 00:18:56.240 "digest": "sha384", 00:18:56.240 "dhgroup": "ffdhe6144" 00:18:56.240 } 00:18:56.240 } 00:18:56.240 ]' 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.240 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.500 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.500 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.500 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.500 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.500 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.500 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.441 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.442 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.731 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.990 { 00:18:57.990 "cntlid": 83, 00:18:57.990 "qid": 0, 00:18:57.990 "state": "enabled", 00:18:57.990 "thread": "nvmf_tgt_poll_group_000", 00:18:57.990 "listen_address": { 00:18:57.990 "trtype": "TCP", 00:18:57.990 "adrfam": "IPv4", 00:18:57.990 "traddr": "10.0.0.2", 00:18:57.990 "trsvcid": "4420" 00:18:57.990 }, 00:18:57.990 "peer_address": { 00:18:57.990 "trtype": "TCP", 00:18:57.990 "adrfam": "IPv4", 00:18:57.990 "traddr": "10.0.0.1", 00:18:57.990 "trsvcid": "41960" 00:18:57.990 }, 00:18:57.990 "auth": { 00:18:57.990 "state": "completed", 00:18:57.990 "digest": "sha384", 00:18:57.990 "dhgroup": "ffdhe6144" 00:18:57.990 } 00:18:57.990 } 00:18:57.990 ]' 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.990 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.251 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.251 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.251 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.251 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.251 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.251 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret 
DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.194 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.762 00:18:59.762 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.762 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.762 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.762 { 00:18:59.762 "cntlid": 85, 00:18:59.762 "qid": 0, 00:18:59.762 "state": "enabled", 00:18:59.762 "thread": "nvmf_tgt_poll_group_000", 00:18:59.762 "listen_address": { 00:18:59.762 "trtype": "TCP", 00:18:59.762 "adrfam": "IPv4", 00:18:59.762 "traddr": "10.0.0.2", 00:18:59.762 "trsvcid": "4420" 00:18:59.762 }, 00:18:59.762 "peer_address": { 00:18:59.762 "trtype": "TCP", 00:18:59.762 "adrfam": "IPv4", 00:18:59.762 "traddr": "10.0.0.1", 00:18:59.762 "trsvcid": "41982" 00:18:59.762 }, 00:18:59.762 "auth": { 00:18:59.762 "state": "completed", 00:18:59.762 "digest": "sha384", 00:18:59.762 "dhgroup": "ffdhe6144" 00:18:59.762 } 00:18:59.762 } 00:18:59.762 ]' 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.762 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.021 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.021 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.021 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.021 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
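The trace above covers one full pass of the sha384/ffdhe6144 loop: the host RPC socket is restricted to the digest and DH group under test, the target registers the host NQN with a DH-HMAC-CHAP key pair, the host attaches a controller with the matching keys, the resulting qpair is inspected, and the session is torn down before the next key is tried. A minimal sketch of that sequence, assuming the same rpc.py path, host socket and NQNs that appear in the log, and keyring entries key1/ckey1 loaded earlier in the test:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host side: only offer the digest/dhgroup being exercised in this pass.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: allow the host NQN and bind the key pair to it.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; DH-HMAC-CHAP is negotiated during connect.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Inspect the negotiated parameters on the target, then tear the session down again.
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

In the log the script additionally round-trips the same keys through the kernel initiator (nvme connect / nvme disconnect with the corresponding DHHC-1 secrets) before removing the host and moving on to the next key.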
00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.013 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.272 00:19:01.272 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.272 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.272 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.531 { 00:19:01.531 "cntlid": 87, 00:19:01.531 "qid": 0, 00:19:01.531 "state": "enabled", 00:19:01.531 "thread": "nvmf_tgt_poll_group_000", 00:19:01.531 "listen_address": { 00:19:01.531 "trtype": "TCP", 00:19:01.531 "adrfam": "IPv4", 00:19:01.531 "traddr": "10.0.0.2", 00:19:01.531 "trsvcid": "4420" 00:19:01.531 }, 00:19:01.531 "peer_address": { 00:19:01.531 "trtype": "TCP", 00:19:01.531 "adrfam": "IPv4", 00:19:01.531 "traddr": "10.0.0.1", 00:19:01.531 "trsvcid": "59456" 00:19:01.531 }, 00:19:01.531 "auth": { 00:19:01.531 "state": "completed", 
00:19:01.531 "digest": "sha384", 00:19:01.531 "dhgroup": "ffdhe6144" 00:19:01.531 } 00:19:01.531 } 00:19:01.531 ]' 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.531 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.789 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.789 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.789 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.789 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.728 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.728 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.300 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.300 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.561 { 00:19:03.561 "cntlid": 89, 00:19:03.561 "qid": 0, 00:19:03.561 "state": "enabled", 00:19:03.561 "thread": "nvmf_tgt_poll_group_000", 00:19:03.561 "listen_address": { 00:19:03.561 "trtype": "TCP", 00:19:03.561 "adrfam": "IPv4", 00:19:03.561 "traddr": "10.0.0.2", 00:19:03.561 "trsvcid": "4420" 00:19:03.561 }, 00:19:03.561 "peer_address": { 00:19:03.561 "trtype": "TCP", 00:19:03.561 "adrfam": "IPv4", 00:19:03.561 "traddr": "10.0.0.1", 00:19:03.561 "trsvcid": "59492" 00:19:03.561 }, 00:19:03.561 "auth": { 00:19:03.561 "state": "completed", 00:19:03.561 "digest": "sha384", 00:19:03.561 "dhgroup": "ffdhe8192" 00:19:03.561 } 00:19:03.561 } 00:19:03.561 ]' 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.561 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.821 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.393 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.653 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.654 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.654 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
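Each attach in this ffdhe8192 pass is followed by the same verification the script applied above: the host must report the nvme0 controller, and the target-side qpair must show a completed DH-HMAC-CHAP exchange with the expected digest and DH group. A condensed sketch of that check, reusing the rpc.py path, host socket and subsystem NQN from the log, with the expected values for this sha384/ffdhe8192 pass:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # The host-side bdev layer must have created the controller.
    [[ "$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

    # The target-side qpair must report a completed authentication with the parameters under test.
    auth=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth')
    [[ "$(jq -r .state   <<< "$auth")" == completed ]]
    [[ "$(jq -r .digest  <<< "$auth")" == sha384 ]]
    [[ "$(jq -r .dhgroup <<< "$auth")" == ffdhe8192 ]]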
00:19:05.225 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.225 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.225 { 00:19:05.225 "cntlid": 91, 00:19:05.225 "qid": 0, 00:19:05.225 "state": "enabled", 00:19:05.225 "thread": "nvmf_tgt_poll_group_000", 00:19:05.225 "listen_address": { 00:19:05.225 "trtype": "TCP", 00:19:05.225 "adrfam": "IPv4", 00:19:05.225 "traddr": "10.0.0.2", 00:19:05.225 "trsvcid": "4420" 00:19:05.226 }, 00:19:05.226 "peer_address": { 00:19:05.226 "trtype": "TCP", 00:19:05.226 "adrfam": "IPv4", 00:19:05.226 "traddr": "10.0.0.1", 00:19:05.226 "trsvcid": "59510" 00:19:05.226 }, 00:19:05.226 "auth": { 00:19:05.226 "state": "completed", 00:19:05.226 "digest": "sha384", 00:19:05.226 "dhgroup": "ffdhe8192" 00:19:05.226 } 00:19:05.226 } 00:19:05.226 ]' 00:19:05.226 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.487 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.758 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.335 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.595 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.167 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.167 { 
00:19:07.167 "cntlid": 93, 00:19:07.167 "qid": 0, 00:19:07.167 "state": "enabled", 00:19:07.167 "thread": "nvmf_tgt_poll_group_000", 00:19:07.167 "listen_address": { 00:19:07.167 "trtype": "TCP", 00:19:07.167 "adrfam": "IPv4", 00:19:07.167 "traddr": "10.0.0.2", 00:19:07.167 "trsvcid": "4420" 00:19:07.167 }, 00:19:07.167 "peer_address": { 00:19:07.167 "trtype": "TCP", 00:19:07.167 "adrfam": "IPv4", 00:19:07.167 "traddr": "10.0.0.1", 00:19:07.167 "trsvcid": "59526" 00:19:07.167 }, 00:19:07.167 "auth": { 00:19:07.167 "state": "completed", 00:19:07.167 "digest": "sha384", 00:19:07.167 "dhgroup": "ffdhe8192" 00:19:07.167 } 00:19:07.167 } 00:19:07.167 ]' 00:19:07.167 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.428 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.689 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.259 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.520 20:14:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.520 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.091 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.091 { 00:19:09.091 "cntlid": 95, 00:19:09.091 "qid": 0, 00:19:09.091 "state": "enabled", 00:19:09.091 "thread": "nvmf_tgt_poll_group_000", 00:19:09.091 "listen_address": { 00:19:09.091 "trtype": "TCP", 00:19:09.091 "adrfam": "IPv4", 00:19:09.091 "traddr": "10.0.0.2", 00:19:09.091 "trsvcid": "4420" 00:19:09.091 }, 00:19:09.091 "peer_address": { 00:19:09.091 "trtype": "TCP", 00:19:09.091 "adrfam": "IPv4", 00:19:09.091 "traddr": "10.0.0.1", 00:19:09.091 "trsvcid": "59558" 00:19:09.091 }, 00:19:09.091 "auth": { 00:19:09.091 "state": "completed", 00:19:09.091 "digest": "sha384", 00:19:09.091 "dhgroup": "ffdhe8192" 00:19:09.091 } 00:19:09.091 } 00:19:09.091 ]' 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.091 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.351 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.351 20:14:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.351 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.351 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.351 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.351 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.292 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.552 00:19:10.552 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.552 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.552 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.813 { 00:19:10.813 "cntlid": 97, 00:19:10.813 "qid": 0, 00:19:10.813 "state": "enabled", 00:19:10.813 "thread": "nvmf_tgt_poll_group_000", 00:19:10.813 "listen_address": { 00:19:10.813 "trtype": "TCP", 00:19:10.813 "adrfam": "IPv4", 00:19:10.813 "traddr": "10.0.0.2", 00:19:10.813 "trsvcid": "4420" 00:19:10.813 }, 00:19:10.813 "peer_address": { 00:19:10.813 "trtype": "TCP", 00:19:10.813 "adrfam": "IPv4", 00:19:10.813 "traddr": "10.0.0.1", 00:19:10.813 "trsvcid": "40136" 00:19:10.813 }, 00:19:10.813 "auth": { 00:19:10.813 "state": "completed", 00:19:10.813 "digest": "sha512", 00:19:10.813 "dhgroup": "null" 00:19:10.813 } 00:19:10.813 } 00:19:10.813 ]' 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.813 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.073 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret 
DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:11.643 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.904 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.169 00:19:12.169 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.169 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.169 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.481 { 00:19:12.481 "cntlid": 99, 00:19:12.481 "qid": 0, 00:19:12.481 "state": "enabled", 00:19:12.481 "thread": "nvmf_tgt_poll_group_000", 00:19:12.481 "listen_address": { 00:19:12.481 "trtype": "TCP", 00:19:12.481 "adrfam": "IPv4", 00:19:12.481 "traddr": "10.0.0.2", 00:19:12.481 "trsvcid": "4420" 00:19:12.481 }, 00:19:12.481 "peer_address": { 00:19:12.481 "trtype": "TCP", 00:19:12.481 "adrfam": "IPv4", 00:19:12.481 "traddr": "10.0.0.1", 00:19:12.481 "trsvcid": "40160" 00:19:12.481 }, 00:19:12.481 "auth": { 00:19:12.481 "state": "completed", 00:19:12.481 "digest": "sha512", 00:19:12.481 "dhgroup": "null" 00:19:12.481 } 00:19:12.481 } 00:19:12.481 ]' 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.481 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.741 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.357 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.357 20:14:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.616 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:13.616 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.616 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.616 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.616 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.616 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.617 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.617 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.617 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.617 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.617 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.617 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.876 00:19:13.877 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.877 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.877 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.138 { 00:19:14.138 "cntlid": 101, 00:19:14.138 "qid": 0, 00:19:14.138 "state": "enabled", 00:19:14.138 "thread": "nvmf_tgt_poll_group_000", 00:19:14.138 "listen_address": { 00:19:14.138 "trtype": "TCP", 00:19:14.138 "adrfam": "IPv4", 00:19:14.138 "traddr": "10.0.0.2", 00:19:14.138 "trsvcid": "4420" 00:19:14.138 }, 00:19:14.138 "peer_address": { 00:19:14.138 "trtype": "TCP", 00:19:14.138 "adrfam": "IPv4", 00:19:14.138 "traddr": "10.0.0.1", 00:19:14.138 "trsvcid": "40200" 00:19:14.138 }, 00:19:14.138 "auth": 
{ 00:19:14.138 "state": "completed", 00:19:14.138 "digest": "sha512", 00:19:14.138 "dhgroup": "null" 00:19:14.138 } 00:19:14.138 } 00:19:14.138 ]' 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.138 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.398 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.969 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.231 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.492 00:19:15.492 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.492 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.492 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.753 { 00:19:15.753 "cntlid": 103, 00:19:15.753 "qid": 0, 00:19:15.753 "state": "enabled", 00:19:15.753 "thread": "nvmf_tgt_poll_group_000", 00:19:15.753 "listen_address": { 00:19:15.753 "trtype": "TCP", 00:19:15.753 "adrfam": "IPv4", 00:19:15.753 "traddr": "10.0.0.2", 00:19:15.753 "trsvcid": "4420" 00:19:15.753 }, 00:19:15.753 "peer_address": { 00:19:15.753 "trtype": "TCP", 00:19:15.753 "adrfam": "IPv4", 00:19:15.753 "traddr": "10.0.0.1", 00:19:15.753 "trsvcid": "40210" 00:19:15.753 }, 00:19:15.753 "auth": { 00:19:15.753 "state": "completed", 00:19:15.753 "digest": "sha512", 00:19:15.753 "dhgroup": "null" 00:19:15.753 } 00:19:15.753 } 00:19:15.753 ]' 00:19:15.753 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.753 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.013 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:16.584 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.585 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.585 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.585 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.585 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.585 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.585 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.845 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.106 00:19:17.106 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.106 20:14:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.106 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.366 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.367 { 00:19:17.367 "cntlid": 105, 00:19:17.367 "qid": 0, 00:19:17.367 "state": "enabled", 00:19:17.367 "thread": "nvmf_tgt_poll_group_000", 00:19:17.367 "listen_address": { 00:19:17.367 "trtype": "TCP", 00:19:17.367 "adrfam": "IPv4", 00:19:17.367 "traddr": "10.0.0.2", 00:19:17.367 "trsvcid": "4420" 00:19:17.367 }, 00:19:17.367 "peer_address": { 00:19:17.367 "trtype": "TCP", 00:19:17.367 "adrfam": "IPv4", 00:19:17.367 "traddr": "10.0.0.1", 00:19:17.367 "trsvcid": "40232" 00:19:17.367 }, 00:19:17.367 "auth": { 00:19:17.367 "state": "completed", 00:19:17.367 "digest": "sha512", 00:19:17.367 "dhgroup": "ffdhe2048" 00:19:17.367 } 00:19:17.367 } 00:19:17.367 ]' 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.367 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.629 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:18.201 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.461 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.723 00:19:18.723 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.723 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.723 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.983 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.984 { 00:19:18.984 "cntlid": 107, 00:19:18.984 "qid": 0, 00:19:18.984 "state": "enabled", 00:19:18.984 
"thread": "nvmf_tgt_poll_group_000", 00:19:18.984 "listen_address": { 00:19:18.984 "trtype": "TCP", 00:19:18.984 "adrfam": "IPv4", 00:19:18.984 "traddr": "10.0.0.2", 00:19:18.984 "trsvcid": "4420" 00:19:18.984 }, 00:19:18.984 "peer_address": { 00:19:18.984 "trtype": "TCP", 00:19:18.984 "adrfam": "IPv4", 00:19:18.984 "traddr": "10.0.0.1", 00:19:18.984 "trsvcid": "40258" 00:19:18.984 }, 00:19:18.984 "auth": { 00:19:18.984 "state": "completed", 00:19:18.984 "digest": "sha512", 00:19:18.984 "dhgroup": "ffdhe2048" 00:19:18.984 } 00:19:18.984 } 00:19:18.984 ]' 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.984 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.244 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:20.185 20:14:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.185 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.445 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.445 { 00:19:20.445 "cntlid": 109, 00:19:20.445 "qid": 0, 00:19:20.445 "state": "enabled", 00:19:20.445 "thread": "nvmf_tgt_poll_group_000", 00:19:20.445 "listen_address": { 00:19:20.445 "trtype": "TCP", 00:19:20.445 "adrfam": "IPv4", 00:19:20.445 "traddr": "10.0.0.2", 00:19:20.445 "trsvcid": "4420" 00:19:20.445 }, 00:19:20.445 "peer_address": { 00:19:20.445 "trtype": "TCP", 00:19:20.445 "adrfam": "IPv4", 00:19:20.445 "traddr": "10.0.0.1", 00:19:20.445 "trsvcid": "42820" 00:19:20.445 }, 00:19:20.445 "auth": { 00:19:20.445 "state": "completed", 00:19:20.445 "digest": "sha512", 00:19:20.445 "dhgroup": "ffdhe2048" 00:19:20.445 } 00:19:20.445 } 00:19:20.445 ]' 00:19:20.445 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.705 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.705 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.705 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.705 20:14:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.705 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.705 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.705 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.967 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.538 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.820 20:14:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.080 00:19:22.080 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.080 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.080 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.080 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.340 { 00:19:22.340 "cntlid": 111, 00:19:22.340 "qid": 0, 00:19:22.340 "state": "enabled", 00:19:22.340 "thread": "nvmf_tgt_poll_group_000", 00:19:22.340 "listen_address": { 00:19:22.340 "trtype": "TCP", 00:19:22.340 "adrfam": "IPv4", 00:19:22.340 "traddr": "10.0.0.2", 00:19:22.340 "trsvcid": "4420" 00:19:22.340 }, 00:19:22.340 "peer_address": { 00:19:22.340 "trtype": "TCP", 00:19:22.340 "adrfam": "IPv4", 00:19:22.340 "traddr": "10.0.0.1", 00:19:22.340 "trsvcid": "42844" 00:19:22.340 }, 00:19:22.340 "auth": { 00:19:22.340 "state": "completed", 00:19:22.340 "digest": "sha512", 00:19:22.340 "dhgroup": "ffdhe2048" 00:19:22.340 } 00:19:22.340 } 00:19:22.340 ]' 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.340 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.601 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:23.173 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.434 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.435 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.435 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.435 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.435 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.695 00:19:23.695 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.695 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.695 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.956 { 00:19:23.956 "cntlid": 113, 00:19:23.956 "qid": 0, 00:19:23.956 "state": "enabled", 00:19:23.956 "thread": "nvmf_tgt_poll_group_000", 00:19:23.956 "listen_address": { 00:19:23.956 "trtype": "TCP", 00:19:23.956 "adrfam": "IPv4", 00:19:23.956 "traddr": "10.0.0.2", 00:19:23.956 "trsvcid": "4420" 00:19:23.956 }, 00:19:23.956 "peer_address": { 00:19:23.956 "trtype": "TCP", 00:19:23.956 "adrfam": "IPv4", 00:19:23.956 "traddr": "10.0.0.1", 00:19:23.956 "trsvcid": "42860" 00:19:23.956 }, 00:19:23.956 "auth": { 00:19:23.956 "state": "completed", 00:19:23.956 "digest": "sha512", 00:19:23.956 "dhgroup": "ffdhe3072" 00:19:23.956 } 00:19:23.956 } 00:19:23.956 ]' 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.956 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.336 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.935 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.196 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.457 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.457 { 00:19:25.457 "cntlid": 115, 00:19:25.457 "qid": 0, 00:19:25.457 "state": "enabled", 00:19:25.457 "thread": "nvmf_tgt_poll_group_000", 00:19:25.457 "listen_address": { 00:19:25.457 "trtype": "TCP", 00:19:25.457 "adrfam": "IPv4", 00:19:25.457 "traddr": "10.0.0.2", 00:19:25.457 "trsvcid": "4420" 00:19:25.457 }, 00:19:25.457 "peer_address": { 00:19:25.457 "trtype": "TCP", 00:19:25.457 "adrfam": "IPv4", 00:19:25.457 "traddr": "10.0.0.1", 00:19:25.457 "trsvcid": "42892" 00:19:25.457 }, 00:19:25.457 "auth": { 00:19:25.457 "state": "completed", 00:19:25.457 "digest": "sha512", 00:19:25.457 "dhgroup": "ffdhe3072" 00:19:25.457 } 00:19:25.457 } 
00:19:25.457 ]' 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.457 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.730 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.730 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.730 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.730 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.730 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.730 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.672 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.672 20:14:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.672 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.932 00:19:26.932 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.932 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.932 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.192 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.192 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.192 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.192 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.192 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.192 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.192 { 00:19:27.192 "cntlid": 117, 00:19:27.192 "qid": 0, 00:19:27.192 "state": "enabled", 00:19:27.192 "thread": "nvmf_tgt_poll_group_000", 00:19:27.192 "listen_address": { 00:19:27.193 "trtype": "TCP", 00:19:27.193 "adrfam": "IPv4", 00:19:27.193 "traddr": "10.0.0.2", 00:19:27.193 "trsvcid": "4420" 00:19:27.193 }, 00:19:27.193 "peer_address": { 00:19:27.193 "trtype": "TCP", 00:19:27.193 "adrfam": "IPv4", 00:19:27.193 "traddr": "10.0.0.1", 00:19:27.193 "trsvcid": "42920" 00:19:27.193 }, 00:19:27.193 "auth": { 00:19:27.193 "state": "completed", 00:19:27.193 "digest": "sha512", 00:19:27.193 "dhgroup": "ffdhe3072" 00:19:27.193 } 00:19:27.193 } 00:19:27.193 ]' 00:19:27.193 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.193 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.193 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.193 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.193 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.453 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.453 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.453 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.453 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.396 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.657 00:19:28.657 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.657 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.657 20:14:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.918 { 00:19:28.918 "cntlid": 119, 00:19:28.918 "qid": 0, 00:19:28.918 "state": "enabled", 00:19:28.918 "thread": "nvmf_tgt_poll_group_000", 00:19:28.918 "listen_address": { 00:19:28.918 "trtype": "TCP", 00:19:28.918 "adrfam": "IPv4", 00:19:28.918 "traddr": "10.0.0.2", 00:19:28.918 "trsvcid": "4420" 00:19:28.918 }, 00:19:28.918 "peer_address": { 00:19:28.918 "trtype": "TCP", 00:19:28.918 "adrfam": "IPv4", 00:19:28.918 "traddr": "10.0.0.1", 00:19:28.918 "trsvcid": "42938" 00:19:28.918 }, 00:19:28.918 "auth": { 00:19:28.918 "state": "completed", 00:19:28.918 "digest": "sha512", 00:19:28.918 "dhgroup": "ffdhe3072" 00:19:28.918 } 00:19:28.918 } 00:19:28.918 ]' 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.918 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.179 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:29.753 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.753 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.753 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.753 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.014 20:14:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.014 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.274 00:19:30.274 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.274 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.274 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.534 { 00:19:30.534 "cntlid": 121, 00:19:30.534 "qid": 0, 00:19:30.534 "state": "enabled", 00:19:30.534 "thread": "nvmf_tgt_poll_group_000", 00:19:30.534 "listen_address": { 00:19:30.534 "trtype": "TCP", 00:19:30.534 "adrfam": "IPv4", 
00:19:30.534 "traddr": "10.0.0.2", 00:19:30.534 "trsvcid": "4420" 00:19:30.534 }, 00:19:30.534 "peer_address": { 00:19:30.534 "trtype": "TCP", 00:19:30.534 "adrfam": "IPv4", 00:19:30.534 "traddr": "10.0.0.1", 00:19:30.534 "trsvcid": "58388" 00:19:30.534 }, 00:19:30.534 "auth": { 00:19:30.534 "state": "completed", 00:19:30.534 "digest": "sha512", 00:19:30.534 "dhgroup": "ffdhe4096" 00:19:30.534 } 00:19:30.534 } 00:19:30.534 ]' 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.534 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.793 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:31.732 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.732 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.733 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.733 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.733 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.733 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.733 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.733 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.733 20:14:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.733 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.993 00:19:31.993 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.993 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.993 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.253 { 00:19:32.253 "cntlid": 123, 00:19:32.253 "qid": 0, 00:19:32.253 "state": "enabled", 00:19:32.253 "thread": "nvmf_tgt_poll_group_000", 00:19:32.253 "listen_address": { 00:19:32.253 "trtype": "TCP", 00:19:32.253 "adrfam": "IPv4", 00:19:32.253 "traddr": "10.0.0.2", 00:19:32.253 "trsvcid": "4420" 00:19:32.253 }, 00:19:32.253 "peer_address": { 00:19:32.253 "trtype": "TCP", 00:19:32.253 "adrfam": "IPv4", 00:19:32.253 "traddr": "10.0.0.1", 00:19:32.253 "trsvcid": "58406" 00:19:32.253 }, 00:19:32.253 "auth": { 00:19:32.253 "state": "completed", 00:19:32.253 "digest": "sha512", 00:19:32.253 "dhgroup": "ffdhe4096" 00:19:32.253 } 00:19:32.253 } 00:19:32.253 ]' 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.253 20:14:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.253 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.512 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.451 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.710 00:19:33.710 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.710 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.710 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.971 { 00:19:33.971 "cntlid": 125, 00:19:33.971 "qid": 0, 00:19:33.971 "state": "enabled", 00:19:33.971 "thread": "nvmf_tgt_poll_group_000", 00:19:33.971 "listen_address": { 00:19:33.971 "trtype": "TCP", 00:19:33.971 "adrfam": "IPv4", 00:19:33.971 "traddr": "10.0.0.2", 00:19:33.971 "trsvcid": "4420" 00:19:33.971 }, 00:19:33.971 "peer_address": { 00:19:33.971 "trtype": "TCP", 00:19:33.971 "adrfam": "IPv4", 00:19:33.971 "traddr": "10.0.0.1", 00:19:33.971 "trsvcid": "58440" 00:19:33.971 }, 00:19:33.971 "auth": { 00:19:33.971 "state": "completed", 00:19:33.971 "digest": "sha512", 00:19:33.971 "dhgroup": "ffdhe4096" 00:19:33.971 } 00:19:33.971 } 00:19:33.971 ]' 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.971 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.232 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
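The cycle that just completed (and repeats below for every digest/DH-group/key combination) is the core DH-HMAC-CHAP check driven by target/auth.sh: the host-side bdev_nvme options are restricted to a single digest and DH group, the target registers the host NQN with a key/controller-key pair, a controller is attached over TCP so in-band authentication runs, the resulting qpair's auth block is verified with jq, and the same handshake is then repeated with the kernel initiator via nvme connect. The following is a minimal standalone sketch of one such iteration, reconstructed from the commands logged above; it assumes the target subsystem, the listener on 10.0.0.2:4420, the host RPC socket at /var/tmp/host.sock, and the key2/ckey2 keyring entries were already created earlier in this run, exactly as in this log.

#!/usr/bin/env bash
# Sketch of a single connect_authenticate iteration (sha512 / ffdhe4096 / key2),
# reconstructed from the logged commands. Paths, NQNs and key names are taken
# from this run; target-side calls go to the default SPDK RPC socket.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host side: only allow the digest/dhgroup under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: require DH-HMAC-CHAP for this host with key2/ckey2.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller from the host app; this triggers in-band authentication.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the negotiated parameters on the target's qpair.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
    '.[0].auth | .digest == "sha512" and .dhgroup == "ffdhe4096" and .state == "completed"'

# Tear down, then repeat the handshake with the kernel initiator.
# $key2_secret / $ckey2_secret are placeholder variables standing in for the
# DHHC-1:… secret strings that appear verbatim in the log above.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the remainder of the log the same pattern runs for key3 and then again for the ffdhe6144 and ffdhe8192 DH groups, before target/auth.sh@102-114 re-enables the full digest and DH-group lists on the host and re-authenticates once more.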
00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.804 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.805 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.066 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.327 00:19:35.327 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.327 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.327 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.588 { 00:19:35.588 "cntlid": 127, 00:19:35.588 "qid": 0, 00:19:35.588 "state": "enabled", 00:19:35.588 "thread": "nvmf_tgt_poll_group_000", 00:19:35.588 "listen_address": { 00:19:35.588 "trtype": "TCP", 00:19:35.588 "adrfam": "IPv4", 00:19:35.588 "traddr": "10.0.0.2", 00:19:35.588 "trsvcid": "4420" 00:19:35.588 }, 00:19:35.588 "peer_address": { 00:19:35.588 "trtype": "TCP", 00:19:35.588 "adrfam": "IPv4", 00:19:35.588 "traddr": "10.0.0.1", 00:19:35.588 "trsvcid": "58460" 00:19:35.588 }, 00:19:35.588 "auth": { 00:19:35.588 "state": "completed", 00:19:35.588 "digest": "sha512", 00:19:35.588 "dhgroup": "ffdhe4096" 00:19:35.588 } 00:19:35.588 } 00:19:35.588 ]' 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.588 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.849 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.419 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.680 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.941 00:19:36.941 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.941 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.941 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.202 { 00:19:37.202 "cntlid": 129, 00:19:37.202 "qid": 0, 00:19:37.202 "state": "enabled", 00:19:37.202 "thread": "nvmf_tgt_poll_group_000", 00:19:37.202 "listen_address": { 00:19:37.202 "trtype": "TCP", 00:19:37.202 "adrfam": "IPv4", 00:19:37.202 "traddr": "10.0.0.2", 00:19:37.202 "trsvcid": "4420" 00:19:37.202 }, 00:19:37.202 "peer_address": { 00:19:37.202 "trtype": "TCP", 00:19:37.202 "adrfam": "IPv4", 00:19:37.202 "traddr": "10.0.0.1", 00:19:37.202 "trsvcid": "58494" 00:19:37.202 }, 00:19:37.202 "auth": { 00:19:37.202 "state": "completed", 00:19:37.202 "digest": "sha512", 00:19:37.202 "dhgroup": "ffdhe6144" 00:19:37.202 } 00:19:37.202 } 00:19:37.202 ]' 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.202 20:14:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.202 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.462 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.462 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.462 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.462 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.405 20:14:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.405 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.975 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.975 { 00:19:38.975 "cntlid": 131, 00:19:38.975 "qid": 0, 00:19:38.975 "state": "enabled", 00:19:38.975 "thread": "nvmf_tgt_poll_group_000", 00:19:38.975 "listen_address": { 00:19:38.975 "trtype": "TCP", 00:19:38.975 "adrfam": "IPv4", 00:19:38.975 "traddr": "10.0.0.2", 00:19:38.975 "trsvcid": "4420" 00:19:38.975 }, 00:19:38.975 "peer_address": { 00:19:38.975 "trtype": "TCP", 00:19:38.975 "adrfam": "IPv4", 00:19:38.975 "traddr": "10.0.0.1", 00:19:38.975 "trsvcid": "58526" 00:19:38.975 }, 00:19:38.975 "auth": { 00:19:38.975 "state": "completed", 00:19:38.975 "digest": "sha512", 00:19:38.975 "dhgroup": "ffdhe6144" 00:19:38.975 } 00:19:38.975 } 00:19:38.975 ]' 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.975 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.234 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.234 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.234 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.234 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.175 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.435 00:19:40.696 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.696 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.696 20:14:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.696 { 00:19:40.696 "cntlid": 133, 00:19:40.696 "qid": 0, 00:19:40.696 "state": "enabled", 00:19:40.696 "thread": "nvmf_tgt_poll_group_000", 00:19:40.696 "listen_address": { 00:19:40.696 "trtype": "TCP", 00:19:40.696 "adrfam": "IPv4", 00:19:40.696 "traddr": "10.0.0.2", 00:19:40.696 "trsvcid": "4420" 00:19:40.696 }, 00:19:40.696 "peer_address": { 00:19:40.696 "trtype": "TCP", 00:19:40.696 "adrfam": "IPv4", 00:19:40.696 "traddr": "10.0.0.1", 00:19:40.696 "trsvcid": "60482" 00:19:40.696 }, 00:19:40.696 "auth": { 00:19:40.696 "state": "completed", 00:19:40.696 "digest": "sha512", 00:19:40.696 "dhgroup": "ffdhe6144" 00:19:40.696 } 00:19:40.696 } 00:19:40.696 ]' 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.696 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.956 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.956 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.956 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.956 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.957 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.957 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.897 20:14:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.897 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.469 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.469 { 00:19:42.469 "cntlid": 135, 00:19:42.469 "qid": 0, 00:19:42.469 "state": "enabled", 00:19:42.469 "thread": "nvmf_tgt_poll_group_000", 00:19:42.469 "listen_address": { 00:19:42.469 "trtype": "TCP", 00:19:42.469 "adrfam": "IPv4", 00:19:42.469 "traddr": "10.0.0.2", 00:19:42.469 "trsvcid": "4420" 00:19:42.469 }, 
00:19:42.469 "peer_address": { 00:19:42.469 "trtype": "TCP", 00:19:42.469 "adrfam": "IPv4", 00:19:42.469 "traddr": "10.0.0.1", 00:19:42.469 "trsvcid": "60508" 00:19:42.469 }, 00:19:42.469 "auth": { 00:19:42.469 "state": "completed", 00:19:42.469 "digest": "sha512", 00:19:42.469 "dhgroup": "ffdhe6144" 00:19:42.469 } 00:19:42.469 } 00:19:42.469 ]' 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.469 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.729 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.729 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.729 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.729 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.729 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.729 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:43.670 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.671 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.671 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.241 00:19:44.241 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.241 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.241 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.502 { 00:19:44.502 "cntlid": 137, 00:19:44.502 "qid": 0, 00:19:44.502 "state": "enabled", 00:19:44.502 "thread": "nvmf_tgt_poll_group_000", 00:19:44.502 "listen_address": { 00:19:44.502 "trtype": "TCP", 00:19:44.502 "adrfam": "IPv4", 00:19:44.502 "traddr": "10.0.0.2", 00:19:44.502 "trsvcid": "4420" 00:19:44.502 }, 00:19:44.502 "peer_address": { 00:19:44.502 "trtype": "TCP", 00:19:44.502 "adrfam": "IPv4", 00:19:44.502 "traddr": "10.0.0.1", 00:19:44.502 "trsvcid": "60528" 00:19:44.502 }, 00:19:44.502 "auth": { 00:19:44.502 "state": "completed", 00:19:44.502 "digest": "sha512", 00:19:44.502 "dhgroup": "ffdhe8192" 00:19:44.502 } 00:19:44.502 } 00:19:44.502 ]' 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.502 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.763 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.763 20:14:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.763 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.763 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.733 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.733 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:45.733 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.733 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.733 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.733 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.733 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.734 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.734 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.734 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.734 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.734 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.734 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.302 00:19:46.302 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.302 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.302 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.562 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.562 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.562 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.562 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.562 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.562 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.562 { 00:19:46.562 "cntlid": 139, 00:19:46.562 "qid": 0, 00:19:46.562 "state": "enabled", 00:19:46.562 "thread": "nvmf_tgt_poll_group_000", 00:19:46.562 "listen_address": { 00:19:46.563 "trtype": "TCP", 00:19:46.563 "adrfam": "IPv4", 00:19:46.563 "traddr": "10.0.0.2", 00:19:46.563 "trsvcid": "4420" 00:19:46.563 }, 00:19:46.563 "peer_address": { 00:19:46.563 "trtype": "TCP", 00:19:46.563 "adrfam": "IPv4", 00:19:46.563 "traddr": "10.0.0.1", 00:19:46.563 "trsvcid": "60550" 00:19:46.563 }, 00:19:46.563 "auth": { 00:19:46.563 "state": "completed", 00:19:46.563 "digest": "sha512", 00:19:46.563 "dhgroup": "ffdhe8192" 00:19:46.563 } 00:19:46.563 } 00:19:46.563 ]' 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.563 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.823 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MjhkNzNkN2MwMmZjMmZlYWQxMjRkYTI0MGI5ODNkOTZdoOAg: --dhchap-ctrl-secret DHHC-1:02:OWQ1YTM3ZTczMDNjMDUxNWE5ZGE0N2Y2M2ZlNzUwZWI1ODVkYmJmOWYzOTEwMzI1m8wtMA==: 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.395 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.654 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:47.654 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.655 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.279 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.279 20:14:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.280 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.280 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.280 { 00:19:48.280 "cntlid": 141, 00:19:48.280 "qid": 0, 00:19:48.280 "state": "enabled", 00:19:48.280 "thread": "nvmf_tgt_poll_group_000", 00:19:48.280 "listen_address": { 00:19:48.280 "trtype": "TCP", 00:19:48.280 "adrfam": "IPv4", 00:19:48.280 "traddr": "10.0.0.2", 00:19:48.280 "trsvcid": "4420" 00:19:48.280 }, 00:19:48.280 "peer_address": { 00:19:48.280 "trtype": "TCP", 00:19:48.280 "adrfam": "IPv4", 00:19:48.280 "traddr": "10.0.0.1", 00:19:48.280 "trsvcid": "60578" 00:19:48.280 }, 00:19:48.280 "auth": { 00:19:48.280 "state": "completed", 00:19:48.280 "digest": "sha512", 00:19:48.280 "dhgroup": "ffdhe8192" 00:19:48.280 } 00:19:48.280 } 00:19:48.280 ]' 00:19:48.280 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.280 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.280 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.539 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.539 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.539 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.539 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.539 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.539 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZWIzYWQ3MzdkZDdiMjA1OGY1OTQzN2E5OGUwOGEwMTI0OWZlZTE3NGNiMDU5N2Uyhd5Z2w==: --dhchap-ctrl-secret DHHC-1:01:ZjJhNDllYjQ0YzVjNTY1YjQwOGI0MmFkMGRmZjAxYWKP0/mJ: 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.482 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.055 00:19:50.055 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.055 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.055 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.315 { 00:19:50.315 "cntlid": 143, 00:19:50.315 "qid": 0, 00:19:50.315 "state": "enabled", 00:19:50.315 "thread": "nvmf_tgt_poll_group_000", 00:19:50.315 "listen_address": { 00:19:50.315 "trtype": "TCP", 00:19:50.315 "adrfam": "IPv4", 00:19:50.315 "traddr": "10.0.0.2", 00:19:50.315 "trsvcid": "4420" 00:19:50.315 }, 00:19:50.315 "peer_address": { 00:19:50.315 "trtype": "TCP", 00:19:50.315 "adrfam": "IPv4", 00:19:50.315 "traddr": "10.0.0.1", 00:19:50.315 "trsvcid": "60608" 00:19:50.315 }, 00:19:50.315 "auth": { 00:19:50.315 "state": "completed", 00:19:50.315 "digest": "sha512", 00:19:50.315 "dhgroup": "ffdhe8192" 00:19:50.315 } 00:19:50.315 } 00:19:50.315 ]' 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.315 
20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.315 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.576 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.520 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.520 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.091 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.091 { 00:19:52.091 "cntlid": 145, 00:19:52.091 "qid": 0, 00:19:52.091 "state": "enabled", 00:19:52.091 "thread": "nvmf_tgt_poll_group_000", 00:19:52.091 "listen_address": { 00:19:52.091 "trtype": "TCP", 00:19:52.091 "adrfam": "IPv4", 00:19:52.091 "traddr": "10.0.0.2", 00:19:52.091 "trsvcid": "4420" 00:19:52.091 }, 00:19:52.091 "peer_address": { 00:19:52.091 "trtype": "TCP", 00:19:52.091 "adrfam": "IPv4", 00:19:52.091 "traddr": "10.0.0.1", 00:19:52.091 "trsvcid": "59434" 00:19:52.091 }, 00:19:52.091 "auth": { 00:19:52.091 "state": "completed", 00:19:52.091 "digest": "sha512", 00:19:52.091 "dhgroup": "ffdhe8192" 00:19:52.091 } 00:19:52.091 } 00:19:52.091 ]' 00:19:52.091 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.352 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.612 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZTkwYjUzYTg3ZDdjYjBkNGYyMzY4ZGYzYzAxNzcyYTM1ZjI2YmEyOWE4NzY2YmVlkcHw7g==: --dhchap-ctrl-secret DHHC-1:03:OTE5OGJlMzcyMzBjMTMwOTFmYmI3NzI5YzRlNTU5ZTYwYzc3YzMzZWM3N2I1MDliNWRkZTMxYmY4MjE4ZTQ0Y0vN+YA=: 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.182 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:53.752 request: 00:19:53.752 { 00:19:53.752 "name": "nvme0", 00:19:53.752 "trtype": "tcp", 00:19:53.752 "traddr": "10.0.0.2", 00:19:53.752 "adrfam": "ipv4", 00:19:53.752 "trsvcid": "4420", 00:19:53.752 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.752 "prchk_reftag": false, 00:19:53.752 "prchk_guard": false, 00:19:53.752 "hdgst": false, 00:19:53.752 "ddgst": false, 00:19:53.752 "dhchap_key": "key2", 00:19:53.752 "method": "bdev_nvme_attach_controller", 00:19:53.752 "req_id": 1 00:19:53.752 } 00:19:53.752 Got JSON-RPC error response 00:19:53.752 response: 00:19:53.752 { 00:19:53.752 "code": -5, 00:19:53.752 "message": "Input/output error" 00:19:53.752 } 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.752 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.753 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:54.324 request: 00:19:54.324 { 00:19:54.324 "name": "nvme0", 00:19:54.324 "trtype": "tcp", 00:19:54.324 "traddr": "10.0.0.2", 00:19:54.324 "adrfam": "ipv4", 00:19:54.324 "trsvcid": "4420", 00:19:54.324 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:54.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.324 "prchk_reftag": false, 00:19:54.324 "prchk_guard": false, 00:19:54.324 "hdgst": false, 00:19:54.324 "ddgst": false, 00:19:54.324 "dhchap_key": "key1", 00:19:54.324 "dhchap_ctrlr_key": "ckey2", 00:19:54.324 "method": "bdev_nvme_attach_controller", 00:19:54.324 "req_id": 1 00:19:54.324 } 00:19:54.324 Got JSON-RPC error response 00:19:54.324 response: 00:19:54.324 { 00:19:54.324 "code": -5, 00:19:54.324 "message": "Input/output error" 00:19:54.324 } 00:19:54.324 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:54.324 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:54.324 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:54.324 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.325 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.908 request: 00:19:54.909 { 00:19:54.909 "name": "nvme0", 00:19:54.909 "trtype": "tcp", 00:19:54.909 "traddr": "10.0.0.2", 00:19:54.909 "adrfam": "ipv4", 00:19:54.909 "trsvcid": "4420", 00:19:54.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:54.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.909 "prchk_reftag": false, 00:19:54.909 "prchk_guard": false, 00:19:54.909 "hdgst": false, 00:19:54.909 "ddgst": false, 00:19:54.909 "dhchap_key": "key1", 00:19:54.909 "dhchap_ctrlr_key": "ckey1", 00:19:54.909 "method": "bdev_nvme_attach_controller", 00:19:54.909 "req_id": 1 00:19:54.909 } 00:19:54.909 Got JSON-RPC error response 00:19:54.909 response: 00:19:54.909 { 00:19:54.909 "code": -5, 00:19:54.909 "message": "Input/output error" 00:19:54.909 } 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 982197 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 982197 ']' 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 982197 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 982197 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 982197' 00:19:54.909 killing process with pid 982197 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 982197 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 982197 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1008589 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1008589 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1008589 ']' 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.909 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1008589 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1008589 ']' 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
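For readers tracing the auth flow above, here is a minimal bash sketch of the attach/verify/detach cycle the harness is driving, built only from the rpc.py invocations visible in this trace. The IP address, port, NQNs and host-side RPC socket are copied from the log; the premise that a key pair named key1/ckey1 was registered earlier in the test, and the use of the default target RPC socket, are assumptions made for illustration, not part of the recorded run.

    #!/usr/bin/env bash
    # Sketch only -- mirrors the target/auth.sh pattern seen in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Target side (default RPC socket assumed): allow this host NQN to
    # authenticate with key1 and require ckey1 for controller authentication.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side (the initiator app listening on /var/tmp/host.sock):
    # matching keys, so the attach is expected to succeed.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Mismatched controller key: the attach is expected to fail with the same
    # JSON-RPC "Input/output error" (code -5) response recorded above.
    if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "unexpected: attach succeeded with mismatched ckey" >&2
    fi

    # Clean up the target-side host entry between iterations.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the recorded run the negative cases are wrapped in the NOT helper, which inverts the exit status the same way the if-block above does.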
00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.849 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.110 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.681 00:19:56.681 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.681 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.681 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.942 { 00:19:56.942 
"cntlid": 1, 00:19:56.942 "qid": 0, 00:19:56.942 "state": "enabled", 00:19:56.942 "thread": "nvmf_tgt_poll_group_000", 00:19:56.942 "listen_address": { 00:19:56.942 "trtype": "TCP", 00:19:56.942 "adrfam": "IPv4", 00:19:56.942 "traddr": "10.0.0.2", 00:19:56.942 "trsvcid": "4420" 00:19:56.942 }, 00:19:56.942 "peer_address": { 00:19:56.942 "trtype": "TCP", 00:19:56.942 "adrfam": "IPv4", 00:19:56.942 "traddr": "10.0.0.1", 00:19:56.942 "trsvcid": "59504" 00:19:56.942 }, 00:19:56.942 "auth": { 00:19:56.942 "state": "completed", 00:19:56.942 "digest": "sha512", 00:19:56.942 "dhgroup": "ffdhe8192" 00:19:56.942 } 00:19:56.942 } 00:19:56.942 ]' 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.942 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.202 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MTNkMDQ4OTlhMzY2Mjc5ZjkxNDgwYTlmYTk1NDM5ZDNhMmU3ODc5YzQxMGQ2MjZmOGUzOWRhYjU1NGI2NGQwZR4X0xs=: 00:19:57.771 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.032 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.293 request: 00:19:58.293 { 00:19:58.293 "name": "nvme0", 00:19:58.293 "trtype": "tcp", 00:19:58.293 "traddr": "10.0.0.2", 00:19:58.293 "adrfam": "ipv4", 00:19:58.293 "trsvcid": "4420", 00:19:58.293 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.293 "prchk_reftag": false, 00:19:58.293 "prchk_guard": false, 00:19:58.293 "hdgst": false, 00:19:58.293 "ddgst": false, 00:19:58.293 "dhchap_key": "key3", 00:19:58.293 "method": "bdev_nvme_attach_controller", 00:19:58.293 "req_id": 1 00:19:58.293 } 00:19:58.293 Got JSON-RPC error response 00:19:58.293 response: 00:19:58.293 { 00:19:58.293 "code": -5, 00:19:58.293 "message": "Input/output error" 00:19:58.293 } 00:19:58.293 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:58.293 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.294 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.555 request: 00:19:58.555 { 00:19:58.555 "name": "nvme0", 00:19:58.555 "trtype": "tcp", 00:19:58.555 "traddr": "10.0.0.2", 00:19:58.555 "adrfam": "ipv4", 00:19:58.555 "trsvcid": "4420", 00:19:58.555 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.555 "prchk_reftag": false, 00:19:58.555 "prchk_guard": false, 00:19:58.555 "hdgst": false, 00:19:58.555 "ddgst": false, 00:19:58.555 "dhchap_key": "key3", 00:19:58.555 "method": "bdev_nvme_attach_controller", 00:19:58.555 "req_id": 1 00:19:58.555 } 00:19:58.555 Got JSON-RPC error response 00:19:58.555 response: 00:19:58.555 { 00:19:58.555 "code": -5, 00:19:58.555 "message": "Input/output error" 00:19:58.555 } 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.555 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.816 request: 00:19:58.816 { 00:19:58.816 "name": "nvme0", 00:19:58.816 "trtype": "tcp", 00:19:58.816 "traddr": "10.0.0.2", 00:19:58.816 "adrfam": "ipv4", 00:19:58.816 "trsvcid": "4420", 00:19:58.816 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.816 "prchk_reftag": false, 00:19:58.816 "prchk_guard": false, 00:19:58.816 "hdgst": false, 00:19:58.816 "ddgst": false, 00:19:58.816 
"dhchap_key": "key0", 00:19:58.816 "dhchap_ctrlr_key": "key1", 00:19:58.816 "method": "bdev_nvme_attach_controller", 00:19:58.816 "req_id": 1 00:19:58.816 } 00:19:58.816 Got JSON-RPC error response 00:19:58.816 response: 00:19:58.816 { 00:19:58.816 "code": -5, 00:19:58.816 "message": "Input/output error" 00:19:58.816 } 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.816 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:59.076 00:19:59.076 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:59.076 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:59.076 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.336 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.336 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.336 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 982329 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 982329 ']' 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 982329 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 982329 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 982329' 00:19:59.595 killing process with pid 982329 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 982329 00:19:59.595 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 982329 00:19:59.855 
20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.855 rmmod nvme_tcp 00:19:59.855 rmmod nvme_fabrics 00:19:59.855 rmmod nvme_keyring 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1008589 ']' 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1008589 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1008589 ']' 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1008589 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1008589 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1008589' 00:19:59.855 killing process with pid 1008589 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1008589 00:19:59.855 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1008589 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.116 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.030 20:14:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.030 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.2gM /tmp/spdk.key-sha256.yMh /tmp/spdk.key-sha384.FY1 /tmp/spdk.key-sha512.RxJ /tmp/spdk.key-sha512.8ZE /tmp/spdk.key-sha384.3aj /tmp/spdk.key-sha256.rnF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:02.030 00:20:02.030 real 2m24.170s 00:20:02.030 user 5m20.241s 00:20:02.030 sys 0m21.712s 00:20:02.030 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.030 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.030 ************************************ 00:20:02.030 END TEST nvmf_auth_target 00:20:02.030 ************************************ 00:20:02.030 20:14:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:02.030 20:14:59 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:02.030 20:14:59 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:02.030 20:14:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:02.030 20:14:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.030 20:14:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.291 ************************************ 00:20:02.291 START TEST nvmf_bdevio_no_huge 00:20:02.291 ************************************ 00:20:02.291 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:02.291 * Looking for test storage... 00:20:02.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.291 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.291 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:02.291 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
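The nvmf/common.sh lines above show the target command line being assembled into the NVMF_APP array, with "${NO_HUGE[@]}" appended for this no-hugepages variant. Below is a short sketch of how such an array is typically expanded when the target is launched; the concrete NO_HUGE contents (--no-huge plus a fixed memory size) are an assumption supplied by the test's --no-hugepages option, not values read from this log.

    #!/usr/bin/env bash
    # Sketch only -- how an NVMF_APP array like the one assembled above is launched.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    NVMF_APP_SHM_ID=${NVMF_APP_SHM_ID:-0}

    NVMF_APP=("$SPDK_BIN")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # as in nvmf/common.sh@29 above
    NO_HUGE=(--no-huge -s 4096)                   # assumed flags/size: run from regular memory
    NVMF_APP+=("${NO_HUGE[@]}")

    "${NVMF_APP[@]}" &                            # start nvmf_tgt without hugepages
    nvmfpid=$!
    echo "nvmf_tgt started with pid $nvmfpid"

When NO_HUGE is left empty, the same array launches the regular hugepage-backed target, which is why the rest of the setup below is shared with the other nvmf target tests.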
00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.292 20:14:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.928 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:08.929 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:08.929 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:08.929 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:08.929 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.929 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.190 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:09.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:20:09.191 00:20:09.191 --- 10.0.0.2 ping statistics --- 00:20:09.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.191 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:20:09.191 00:20:09.191 --- 10.0.0.1 ping statistics --- 00:20:09.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.191 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1013751 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1013751 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1013751 ']' 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.191 20:15:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.451 [2024-07-15 20:15:06.682467] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:20:09.451 [2024-07-15 20:15:06.682537] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:09.451 [2024-07-15 20:15:06.776113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.451 [2024-07-15 20:15:06.883970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
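[Editor's note] The nvmftestinit output above shows how the test isolates the two ice-driven ports of the NIC: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the default namespace, and the two pings confirm connectivity in both directions before nvmf_tgt is started. Condensed into a standalone sketch of the commands visible in this log (the real logic lives in test/nvmf/common.sh; interface names and addresses are simply the ones this CI host uses):

  # flush any stale addresses, then move the target-side port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends and bring the links up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept TCP traffic to port 4420 arriving on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity check in both directions, exactly as logged above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1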
00:20:09.451 [2024-07-15 20:15:06.884022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.451 [2024-07-15 20:15:06.884031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.451 [2024-07-15 20:15:06.884038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.451 [2024-07-15 20:15:06.884044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.451 [2024-07-15 20:15:06.884231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:09.451 [2024-07-15 20:15:06.884385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:09.451 [2024-07-15 20:15:06.884543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.451 [2024-07-15 20:15:06.884543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 [2024-07-15 20:15:07.525716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 Malloc0 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.394 20:15:07 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 [2024-07-15 20:15:07.579505] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:10.394 { 00:20:10.394 "params": { 00:20:10.394 "name": "Nvme$subsystem", 00:20:10.394 "trtype": "$TEST_TRANSPORT", 00:20:10.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.394 "adrfam": "ipv4", 00:20:10.394 "trsvcid": "$NVMF_PORT", 00:20:10.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.394 "hdgst": ${hdgst:-false}, 00:20:10.394 "ddgst": ${ddgst:-false} 00:20:10.394 }, 00:20:10.394 "method": "bdev_nvme_attach_controller" 00:20:10.394 } 00:20:10.394 EOF 00:20:10.394 )") 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:10.394 20:15:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:10.394 "params": { 00:20:10.394 "name": "Nvme1", 00:20:10.395 "trtype": "tcp", 00:20:10.395 "traddr": "10.0.0.2", 00:20:10.395 "adrfam": "ipv4", 00:20:10.395 "trsvcid": "4420", 00:20:10.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.395 "hdgst": false, 00:20:10.395 "ddgst": false 00:20:10.395 }, 00:20:10.395 "method": "bdev_nvme_attach_controller" 00:20:10.395 }' 00:20:10.395 [2024-07-15 20:15:07.634475] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
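[Editor's note] bdevio.sh drives the steps above through rpc_cmd against the nvmf_tgt that was started with --no-huge -s 1024 inside the namespace. Stripped of the xtrace noise, the target-side sequence visible in this log is roughly the following sketch (assumptions: rpc.py reaches that target on the default /var/tmp/spdk.sock, and paths are shortened to the spdk repo root; the flags themselves are copied verbatim from the log):

  # TCP transport with the options used by bdevio.sh
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1, namespace Malloc0, listener on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side is the bdevio app itself, launched with --no-huge -s 1024 and a generated JSON config whose single bdev_nvme_attach_controller entry (printed above) points at that listener; the CUnit suite that follows runs against the resulting Nvme1n1 bdev.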
00:20:10.395 [2024-07-15 20:15:07.634545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1013786 ] 00:20:10.395 [2024-07-15 20:15:07.704198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:10.395 [2024-07-15 20:15:07.801873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.395 [2024-07-15 20:15:07.801991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.395 [2024-07-15 20:15:07.801994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.654 I/O targets: 00:20:10.654 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:10.654 00:20:10.654 00:20:10.654 CUnit - A unit testing framework for C - Version 2.1-3 00:20:10.654 http://cunit.sourceforge.net/ 00:20:10.654 00:20:10.654 00:20:10.654 Suite: bdevio tests on: Nvme1n1 00:20:10.654 Test: blockdev write read block ...passed 00:20:10.654 Test: blockdev write zeroes read block ...passed 00:20:10.654 Test: blockdev write zeroes read no split ...passed 00:20:10.654 Test: blockdev write zeroes read split ...passed 00:20:10.913 Test: blockdev write zeroes read split partial ...passed 00:20:10.914 Test: blockdev reset ...[2024-07-15 20:15:08.139558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.914 [2024-07-15 20:15:08.139620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7c10 (9): Bad file descriptor 00:20:10.914 [2024-07-15 20:15:08.158639] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:10.914 passed 00:20:10.914 Test: blockdev write read 8 blocks ...passed 00:20:10.914 Test: blockdev write read size > 128k ...passed 00:20:10.914 Test: blockdev write read invalid size ...passed 00:20:10.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.914 Test: blockdev write read max offset ...passed 00:20:10.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.914 Test: blockdev writev readv 8 blocks ...passed 00:20:10.914 Test: blockdev writev readv 30 x 1block ...passed 00:20:11.175 Test: blockdev writev readv block ...passed 00:20:11.175 Test: blockdev writev readv size > 128k ...passed 00:20:11.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:11.175 Test: blockdev comparev and writev ...[2024-07-15 20:15:08.387211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.387235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.387246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.387252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.387818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.387826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.387844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.388376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.388384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.388393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.388398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.388959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.388966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.388975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.175 [2024-07-15 20:15:08.388980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:11.175 passed 00:20:11.175 Test: blockdev nvme passthru rw ...passed 00:20:11.175 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:15:08.474162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.175 [2024-07-15 20:15:08.474172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.474699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.175 [2024-07-15 20:15:08.474706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.475139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.175 [2024-07-15 20:15:08.475146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:11.175 [2024-07-15 20:15:08.475502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.175 [2024-07-15 20:15:08.475510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:11.175 passed 00:20:11.175 Test: blockdev nvme admin passthru ...passed 00:20:11.175 Test: blockdev copy ...passed 00:20:11.175 00:20:11.175 Run Summary: Type Total Ran Passed Failed Inactive 00:20:11.175 suites 1 1 n/a 0 0 00:20:11.175 tests 23 23 23 0 0 00:20:11.175 asserts 152 152 152 0 n/a 00:20:11.175 00:20:11.175 Elapsed time = 1.174 seconds 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.437 rmmod nvme_tcp 00:20:11.437 rmmod nvme_fabrics 00:20:11.437 rmmod nvme_keyring 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.437 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1013751 ']' 00:20:11.700 20:15:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1013751 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1013751 ']' 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1013751 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1013751 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1013751' 00:20:11.700 killing process with pid 1013751 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1013751 00:20:11.700 20:15:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1013751 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.962 20:15:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.506 20:15:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.506 00:20:14.506 real 0m11.900s 00:20:14.506 user 0m13.198s 00:20:14.506 sys 0m6.223s 00:20:14.506 20:15:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.506 20:15:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.506 ************************************ 00:20:14.506 END TEST nvmf_bdevio_no_huge 00:20:14.506 ************************************ 00:20:14.506 20:15:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:14.506 20:15:11 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:14.506 20:15:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:14.506 20:15:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.506 20:15:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.506 ************************************ 00:20:14.506 START TEST nvmf_tls 00:20:14.506 ************************************ 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:14.506 * Looking for test storage... 
00:20:14.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.506 20:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.507 20:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.507 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.507 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.507 20:15:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.507 20:15:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.111 
20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:21.111 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:21.111 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:21.111 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:21.111 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.111 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:20:21.371 00:20:21.371 --- 10.0.0.2 ping statistics --- 00:20:21.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.371 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:20:21.371 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:20:21.632 00:20:21.632 --- 10.0.0.1 ping statistics --- 00:20:21.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.632 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1018771 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1018771 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1018771 ']' 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.632 20:15:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.632 [2024-07-15 20:15:18.917099] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:20:21.632 [2024-07-15 20:15:18.917210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.632 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.632 [2024-07-15 20:15:19.008793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.892 [2024-07-15 20:15:19.102594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.892 [2024-07-15 20:15:19.102646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:21.892 [2024-07-15 20:15:19.102655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.892 [2024-07-15 20:15:19.102662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.892 [2024-07-15 20:15:19.102668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.892 [2024-07-15 20:15:19.102692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:22.463 20:15:19 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:22.723 true 00:20:22.723 20:15:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:22.723 20:15:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:22.723 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:22.723 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:22.723 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:22.984 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:22.984 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:23.244 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:23.244 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:23.244 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:23.244 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:23.244 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:23.503 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:23.503 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:23.503 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:23.503 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:23.764 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:23.764 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:23.764 20:15:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:23.764 20:15:21 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:23.764 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:24.025 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:24.025 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:24.025 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:24.285 20:15:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.AilF3z35eE 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.FTw5Yymwu5 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.AilF3z35eE 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.FTw5Yymwu5 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:24.545 20:15:21 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:24.805 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.AilF3z35eE 00:20:24.805 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AilF3z35eE 00:20:24.805 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.065 [2024-07-15 20:15:22.274543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.065 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.065 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.324 [2024-07-15 20:15:22.583297] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.324 [2024-07-15 20:15:22.583469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.324 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.324 malloc0 00:20:25.324 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.589 20:15:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AilF3z35eE 00:20:25.849 [2024-07-15 20:15:23.038330] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:25.849 20:15:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.AilF3z35eE 00:20:25.849 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.844 Initializing NVMe Controllers 00:20:35.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:35.844 Initialization complete. Launching workers. 
00:20:35.844 ======================================================== 00:20:35.844 Latency(us) 00:20:35.844 Device Information : IOPS MiB/s Average min max 00:20:35.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18949.57 74.02 3377.41 1039.95 5323.04 00:20:35.844 ======================================================== 00:20:35.844 Total : 18949.57 74.02 3377.41 1039.95 5323.04 00:20:35.844 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AilF3z35eE 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AilF3z35eE' 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.844 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1021629 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1021629 /var/tmp/bdevperf.sock 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1021629 ']' 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.845 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.845 [2024-07-15 20:15:33.205977] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:35.845 [2024-07-15 20:15:33.206030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021629 ] 00:20:35.845 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.845 [2024-07-15 20:15:33.254749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.105 [2024-07-15 20:15:33.306847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.694 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.694 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.694 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AilF3z35eE 00:20:36.694 [2024-07-15 20:15:34.095671] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.694 [2024-07-15 20:15:34.095730] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.985 TLSTESTn1 00:20:36.985 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:36.985 Running I/O for 10 seconds... 00:20:46.978 00:20:46.978 Latency(us) 00:20:46.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.979 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:46.979 Verification LBA range: start 0x0 length 0x2000 00:20:46.979 TLSTESTn1 : 10.07 2451.75 9.58 0.00 0.00 52040.23 4778.67 138062.51 00:20:46.979 =================================================================================================================== 00:20:46.979 Total : 2451.75 9.58 0.00 0.00 52040.23 4778.67 138062.51 00:20:46.979 0 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1021629 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1021629 ']' 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1021629 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.979 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1021629 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1021629' 00:20:47.239 killing process with pid 1021629 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1021629 00:20:47.239 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.239 00:20:47.239 Latency(us) 00:20:47.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:47.239 =================================================================================================================== 00:20:47.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.239 [2024-07-15 20:15:44.447781] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1021629 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FTw5Yymwu5 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FTw5Yymwu5 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FTw5Yymwu5 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FTw5Yymwu5' 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1023702 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1023702 /var/tmp/bdevperf.sock 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1023702 ']' 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.239 20:15:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.239 [2024-07-15 20:15:44.611270] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:47.239 [2024-07-15 20:15:44.611326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023702 ] 00:20:47.239 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.239 [2024-07-15 20:15:44.660369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.499 [2024-07-15 20:15:44.711886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.068 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.068 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:48.068 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FTw5Yymwu5 00:20:48.068 [2024-07-15 20:15:45.500824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.068 [2024-07-15 20:15:45.500880] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.327 [2024-07-15 20:15:45.508357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.327 [2024-07-15 20:15:45.508900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250eec0 (107): Transport endpoint is not connected 00:20:48.327 [2024-07-15 20:15:45.509895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250eec0 (9): Bad file descriptor 00:20:48.327 [2024-07-15 20:15:45.510896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:48.327 [2024-07-15 20:15:45.510906] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:48.327 [2024-07-15 20:15:45.510916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
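The failure traced above is the expected result of the "NOT run_bdevperf" case with the second key: host1 was registered on the target with /tmp/tmp.AilF3z35eE, so presenting /tmp/tmp.FTw5Yymwu5 breaks the TLS handshake before the controller initializes, and the bdev_nvme_attach_controller request dumped next returns -5 (Input/output error). A hedged sketch of the same check outside the harness, reusing only the rpc.py invocation already visible in this run (the try_attach helper name is ours):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def try_attach(psk_path: str) -> bool:
        # Drive the attach through bdevperf's RPC socket, as the harness does;
        # a non-zero exit mirrors the JSON-RPC error shown in the log.
        cmd = [RPC, "-s", "/var/tmp/bdevperf.sock",
               "bdev_nvme_attach_controller", "-b", "TLSTEST",
               "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
               "-n", "nqn.2016-06.io.spdk:cnode1",
               "-q", "nqn.2016-06.io.spdk:host1",
               "--psk", psk_path]
        return subprocess.run(cmd).returncode == 0

    # Mismatched key: expected to fail, as in the trace above.
    print("attach with wrong key succeeded?", try_attach("/tmp/tmp.FTw5Yymwu5"))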
00:20:48.327 request: 00:20:48.327 { 00:20:48.327 "name": "TLSTEST", 00:20:48.327 "trtype": "tcp", 00:20:48.327 "traddr": "10.0.0.2", 00:20:48.327 "adrfam": "ipv4", 00:20:48.327 "trsvcid": "4420", 00:20:48.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.327 "prchk_reftag": false, 00:20:48.327 "prchk_guard": false, 00:20:48.327 "hdgst": false, 00:20:48.327 "ddgst": false, 00:20:48.327 "psk": "/tmp/tmp.FTw5Yymwu5", 00:20:48.328 "method": "bdev_nvme_attach_controller", 00:20:48.328 "req_id": 1 00:20:48.328 } 00:20:48.328 Got JSON-RPC error response 00:20:48.328 response: 00:20:48.328 { 00:20:48.328 "code": -5, 00:20:48.328 "message": "Input/output error" 00:20:48.328 } 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1023702 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1023702 ']' 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1023702 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1023702 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1023702' 00:20:48.328 killing process with pid 1023702 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1023702 00:20:48.328 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.328 00:20:48.328 Latency(us) 00:20:48.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.328 =================================================================================================================== 00:20:48.328 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.328 [2024-07-15 20:15:45.583412] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1023702 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AilF3z35eE 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AilF3z35eE 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.AilF3z35eE 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AilF3z35eE' 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1023992 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1023992 /var/tmp/bdevperf.sock 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1023992 ']' 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.328 20:15:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.328 [2024-07-15 20:15:45.741257] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:48.328 [2024-07-15 20:15:45.741312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023992 ] 00:20:48.588 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.588 [2024-07-15 20:15:45.791308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.588 [2024-07-15 20:15:45.841917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.157 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.157 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:49.157 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.AilF3z35eE 00:20:49.418 [2024-07-15 20:15:46.650777] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.418 [2024-07-15 20:15:46.650841] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:49.418 [2024-07-15 20:15:46.661086] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:49.418 [2024-07-15 20:15:46.661103] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:49.418 [2024-07-15 20:15:46.661128] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:49.418 [2024-07-15 20:15:46.661951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x706ec0 (107): Transport endpoint is not connected 00:20:49.418 [2024-07-15 20:15:46.662945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x706ec0 (9): Bad file descriptor 00:20:49.418 [2024-07-15 20:15:46.663948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:49.418 [2024-07-15 20:15:46.663955] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:49.418 [2024-07-15 20:15:46.663963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
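The "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" error above shows how the target resolves keys: the TLS identity carries the host and subsystem NQNs, and only pairs registered with nvmf_subsystem_add_host --psk resolve to a key. host2 was never registered in this run, so the lookup fails and the attach ends in the same -5 response dumped below. For illustration only, a sketch of the registration that would make this pair resolvable, mirroring the add_host call traced earlier (the helper name and the choice of key file are ours):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def register_tls_host(subnqn: str, hostnqn: str, psk_path: str) -> None:
        # Same RPC the harness used to register host1, pointed at a different host NQN.
        subprocess.run([RPC, "nvmf_subsystem_add_host", subnqn, hostnqn,
                        "--psk", psk_path], check=True)

    register_tls_host("nqn.2016-06.io.spdk:cnode1",
                      "nqn.2016-06.io.spdk:host2",
                      "/tmp/tmp.FTw5Yymwu5")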
00:20:49.418 request: 00:20:49.418 { 00:20:49.418 "name": "TLSTEST", 00:20:49.418 "trtype": "tcp", 00:20:49.418 "traddr": "10.0.0.2", 00:20:49.418 "adrfam": "ipv4", 00:20:49.418 "trsvcid": "4420", 00:20:49.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.418 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.418 "prchk_reftag": false, 00:20:49.418 "prchk_guard": false, 00:20:49.418 "hdgst": false, 00:20:49.418 "ddgst": false, 00:20:49.418 "psk": "/tmp/tmp.AilF3z35eE", 00:20:49.418 "method": "bdev_nvme_attach_controller", 00:20:49.418 "req_id": 1 00:20:49.418 } 00:20:49.418 Got JSON-RPC error response 00:20:49.418 response: 00:20:49.418 { 00:20:49.418 "code": -5, 00:20:49.418 "message": "Input/output error" 00:20:49.418 } 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1023992 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1023992 ']' 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1023992 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1023992 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1023992' 00:20:49.418 killing process with pid 1023992 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1023992 00:20:49.418 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.418 00:20:49.418 Latency(us) 00:20:49.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.418 =================================================================================================================== 00:20:49.418 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.418 [2024-07-15 20:15:46.750289] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1023992 00:20:49.418 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AilF3z35eE 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AilF3z35eE 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.AilF3z35eE 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AilF3z35eE' 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1024324 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1024324 /var/tmp/bdevperf.sock 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1024324 ']' 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.679 20:15:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.679 [2024-07-15 20:15:46.906613] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:49.679 [2024-07-15 20:15:46.906669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024324 ] 00:20:49.679 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.679 [2024-07-15 20:15:46.955222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.679 [2024-07-15 20:15:47.006808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.268 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.268 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:50.268 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AilF3z35eE 00:20:50.528 [2024-07-15 20:15:47.807607] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.528 [2024-07-15 20:15:47.807661] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:50.528 [2024-07-15 20:15:47.813885] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:50.528 [2024-07-15 20:15:47.813901] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:50.528 [2024-07-15 20:15:47.813919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:50.528 [2024-07-15 20:15:47.814668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cdec0 (107): Transport endpoint is not connected 00:20:50.528 [2024-07-15 20:15:47.815664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cdec0 (9): Bad file descriptor 00:20:50.528 [2024-07-15 20:15:47.816668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:50.528 [2024-07-15 20:15:47.816674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:50.528 [2024-07-15 20:15:47.816682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
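This case is the mirror image of the previous one: the host NQN is known, but nqn.2016-06.io.spdk:cnode2 was never created, so no PSK is registered under that identity either and the attach fails the same way. Below is a sketch of provisioning a second TLS subsystem the way cnode1 was provisioned earlier in this run (create subsystem, TLS listener via -k, namespace, host plus PSK); the helper names, the serial number and the malloc1 bdev are illustrative values, not part of this test.

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def rpc(*args: str) -> None:
        subprocess.run([RPC, *args], check=True)

    def provision_tls_subsystem(subnqn: str, hostnqn: str, psk_path: str) -> None:
        rpc("nvmf_create_subsystem", subnqn, "-s", "SPDK00000000000002", "-m", "10")
        rpc("nvmf_subsystem_add_listener", subnqn, "-t", "tcp",
            "-a", "10.0.0.2", "-s", "4420", "-k")          # -k: TLS-enabled listener
        rpc("bdev_malloc_create", "32", "4096", "-b", "malloc1")
        rpc("nvmf_subsystem_add_ns", subnqn, "malloc1", "-n", "1")
        rpc("nvmf_subsystem_add_host", subnqn, hostnqn, "--psk", psk_path)

    provision_tls_subsystem("nqn.2016-06.io.spdk:cnode2",
                            "nqn.2016-06.io.spdk:host1",
                            "/tmp/tmp.AilF3z35eE")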
00:20:50.528 request: 00:20:50.528 { 00:20:50.528 "name": "TLSTEST", 00:20:50.528 "trtype": "tcp", 00:20:50.528 "traddr": "10.0.0.2", 00:20:50.528 "adrfam": "ipv4", 00:20:50.528 "trsvcid": "4420", 00:20:50.528 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:50.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.528 "prchk_reftag": false, 00:20:50.528 "prchk_guard": false, 00:20:50.528 "hdgst": false, 00:20:50.528 "ddgst": false, 00:20:50.528 "psk": "/tmp/tmp.AilF3z35eE", 00:20:50.528 "method": "bdev_nvme_attach_controller", 00:20:50.528 "req_id": 1 00:20:50.528 } 00:20:50.528 Got JSON-RPC error response 00:20:50.528 response: 00:20:50.528 { 00:20:50.528 "code": -5, 00:20:50.528 "message": "Input/output error" 00:20:50.528 } 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1024324 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1024324 ']' 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1024324 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024324 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024324' 00:20:50.528 killing process with pid 1024324 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1024324 00:20:50.528 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.528 00:20:50.528 Latency(us) 00:20:50.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.528 =================================================================================================================== 00:20:50.528 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.528 [2024-07-15 20:15:47.902442] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:50.528 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1024324 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1024386 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.788 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1024386 /var/tmp/bdevperf.sock 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1024386 ']' 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.789 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.789 [2024-07-15 20:15:48.060849] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:50.789 [2024-07-15 20:15:48.060906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024386 ] 00:20:50.789 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.789 [2024-07-15 20:15:48.109605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.789 [2024-07-15 20:15:48.161560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:51.730 [2024-07-15 20:15:48.966712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:51.730 [2024-07-15 20:15:48.968195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe34a0 (9): Bad file descriptor 00:20:51.730 [2024-07-15 20:15:48.969195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:51.730 [2024-07-15 20:15:48.969202] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:51.730 [2024-07-15 20:15:48.969209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
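The final negative case in this block drops --psk entirely: the listener was created with -k, so the plain-TCP connection is torn down during the handshake (the errno 107 / bad file descriptor lines above) and the attach again ends in -5. For contrast, the passing form used elsewhere in this run simply adds the key registered for host1; a minimal sketch of the two invocations side by side, with paths and NQNs taken from this run:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    ATTACH = [RPC, "-s", "/var/tmp/bdevperf.sock",
              "bdev_nvme_attach_controller", "-b", "TLSTEST",
              "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
              "-n", "nqn.2016-06.io.spdk:cnode1",
              "-q", "nqn.2016-06.io.spdk:host1"]

    # Without a PSK the TLS-only listener rejects the connection (this test case).
    no_psk = subprocess.run(ATTACH)
    # With the key registered for host1 the same call succeeds (the passing runs above).
    with_psk = subprocess.run(ATTACH + ["--psk", "/tmp/tmp.AilF3z35eE"])
    print("no psk rc:", no_psk.returncode, "with psk rc:", with_psk.returncode)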
00:20:51.730 request: 00:20:51.730 { 00:20:51.730 "name": "TLSTEST", 00:20:51.730 "trtype": "tcp", 00:20:51.730 "traddr": "10.0.0.2", 00:20:51.730 "adrfam": "ipv4", 00:20:51.730 "trsvcid": "4420", 00:20:51.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.730 "prchk_reftag": false, 00:20:51.730 "prchk_guard": false, 00:20:51.730 "hdgst": false, 00:20:51.730 "ddgst": false, 00:20:51.730 "method": "bdev_nvme_attach_controller", 00:20:51.730 "req_id": 1 00:20:51.730 } 00:20:51.730 Got JSON-RPC error response 00:20:51.730 response: 00:20:51.730 { 00:20:51.730 "code": -5, 00:20:51.730 "message": "Input/output error" 00:20:51.730 } 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1024386 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1024386 ']' 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1024386 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.730 20:15:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024386 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024386' 00:20:51.730 killing process with pid 1024386 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1024386 00:20:51.730 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.730 00:20:51.730 Latency(us) 00:20:51.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.730 =================================================================================================================== 00:20:51.730 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1024386 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1018771 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1018771 ']' 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1018771 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.730 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1018771 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1018771' 00:20:51.990 
killing process with pid 1018771 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1018771 00:20:51.990 [2024-07-15 20:15:49.199813] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1018771 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.trlwBjPZ8X 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.trlwBjPZ8X 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1024706 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1024706 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1024706 ']' 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.990 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.991 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.991 20:15:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.250 [2024-07-15 20:15:49.428443] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:52.250 [2024-07-15 20:15:49.428505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.250 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.250 [2024-07-15 20:15:49.512701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.250 [2024-07-15 20:15:49.570858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.250 [2024-07-15 20:15:49.570894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.250 [2024-07-15 20:15:49.570900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.250 [2024-07-15 20:15:49.570904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.250 [2024-07-15 20:15:49.570908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.251 [2024-07-15 20:15:49.570928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.trlwBjPZ8X 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.trlwBjPZ8X 00:20:52.822 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.082 [2024-07-15 20:15:50.377328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.082 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.342 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.342 [2024-07-15 20:15:50.690094] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.342 [2024-07-15 20:15:50.690266] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.342 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.602 malloc0 00:20:53.602 20:15:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.602 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.trlwBjPZ8X 00:20:53.862 [2024-07-15 20:15:51.153181] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.trlwBjPZ8X 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.trlwBjPZ8X' 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1025066 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1025066 /var/tmp/bdevperf.sock 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1025066 ']' 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.862 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 [2024-07-15 20:15:51.199560] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:20:53.862 [2024-07-15 20:15:51.199608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025066 ] 00:20:53.862 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.862 [2024-07-15 20:15:51.248386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.122 [2024-07-15 20:15:51.300561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.122 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.122 20:15:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.122 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trlwBjPZ8X 00:20:54.122 [2024-07-15 20:15:51.515823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.122 [2024-07-15 20:15:51.515918] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:54.383 TLSTESTn1 00:20:54.383 20:15:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:54.383 Running I/O for 10 seconds... 00:21:04.379 00:21:04.379 Latency(us) 00:21:04.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.379 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.379 Verification LBA range: start 0x0 length 0x2000 00:21:04.379 TLSTESTn1 : 10.06 2535.69 9.91 0.00 0.00 50321.30 6171.31 151169.71 00:21:04.379 =================================================================================================================== 00:21:04.379 Total : 2535.69 9.91 0.00 0.00 50321.30 6171.31 151169.71 00:21:04.379 0 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1025066 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1025066 ']' 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1025066 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.379 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1025066 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1025066' 00:21:04.641 killing process with pid 1025066 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1025066 00:21:04.641 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.641 00:21:04.641 Latency(us) 00:21:04.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:04.641 =================================================================================================================== 00:21:04.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.641 [2024-07-15 20:16:01.856803] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1025066 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.trlwBjPZ8X 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.trlwBjPZ8X 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.trlwBjPZ8X 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.trlwBjPZ8X 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.trlwBjPZ8X' 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1027134 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1027134 /var/tmp/bdevperf.sock 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027134 ']' 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.641 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.641 [2024-07-15 20:16:02.026126] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:21:04.641 [2024-07-15 20:16:02.026182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027134 ] 00:21:04.641 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.901 [2024-07-15 20:16:02.075083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.901 [2024-07-15 20:16:02.126958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.473 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.473 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.473 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trlwBjPZ8X 00:21:05.733 [2024-07-15 20:16:02.931820] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.733 [2024-07-15 20:16:02.931856] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:05.733 [2024-07-15 20:16:02.931862] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.trlwBjPZ8X 00:21:05.733 request: 00:21:05.733 { 00:21:05.733 "name": "TLSTEST", 00:21:05.733 "trtype": "tcp", 00:21:05.733 "traddr": "10.0.0.2", 00:21:05.733 "adrfam": "ipv4", 00:21:05.733 "trsvcid": "4420", 00:21:05.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.733 "prchk_reftag": false, 00:21:05.733 "prchk_guard": false, 00:21:05.733 "hdgst": false, 00:21:05.733 "ddgst": false, 00:21:05.733 "psk": "/tmp/tmp.trlwBjPZ8X", 00:21:05.733 "method": "bdev_nvme_attach_controller", 00:21:05.733 "req_id": 1 00:21:05.733 } 00:21:05.733 Got JSON-RPC error response 00:21:05.733 response: 00:21:05.733 { 00:21:05.733 "code": -1, 00:21:05.733 "message": "Operation not permitted" 00:21:05.733 } 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1027134 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027134 ']' 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027134 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027134 00:21:05.733 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027134' 00:21:05.733 killing process with pid 1027134 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027134 00:21:05.733 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.733 00:21:05.733 Latency(us) 00:21:05.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.733 
=================================================================================================================== 00:21:05.733 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027134 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1024706 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1024706 ']' 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1024706 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024706 00:21:05.733 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:05.734 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:05.734 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024706' 00:21:05.734 killing process with pid 1024706 00:21:05.734 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1024706 00:21:05.734 [2024-07-15 20:16:03.163560] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:05.734 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1024706 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1027421 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1027421 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027421 ']' 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
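The chmod 0666 step above deliberately breaks the PSK file's permissions, so the second attach attempt is wrapped in the NOT/valid_exec_arg helpers from common/autotest_common.sh, whose es checks (visible just above) invert the command's exit status so that the expected failure counts as a pass. A minimal sketch of that pattern, illustrative only and not the actual helper source:

  # Hypothetical negative-test wrapper in the spirit of the NOT helper seen in the trace.
  NOT() {
      # Run the wrapped command; report success only if it failed.
      if "$@"; then
          return 1
      fi
      return 0
  }

  # Expected to fail: the PSK at /tmp/tmp.trlwBjPZ8X is world-readable (0666),
  # so bdev_nvme_attach_controller refuses it ("Incorrect permissions for PSK file").
  NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.trlwBjPZ8X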
00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.994 20:16:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.994 [2024-07-15 20:16:03.339341] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:05.994 [2024-07-15 20:16:03.339398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.994 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.994 [2024-07-15 20:16:03.418563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.254 [2024-07-15 20:16:03.471506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.254 [2024-07-15 20:16:03.471537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.254 [2024-07-15 20:16:03.471542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.254 [2024-07-15 20:16:03.471546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.254 [2024-07-15 20:16:03.471550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.254 [2024-07-15 20:16:03.471569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.trlwBjPZ8X 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.trlwBjPZ8X 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.trlwBjPZ8X 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.trlwBjPZ8X 00:21:06.863 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.135 [2024-07-15 20:16:04.285177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.135 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.135 
20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.395 [2024-07-15 20:16:04.577882] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.395 [2024-07-15 20:16:04.578038] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.395 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.395 malloc0 00:21:07.395 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.655 20:16:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trlwBjPZ8X 00:21:07.655 [2024-07-15 20:16:05.036928] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:07.655 [2024-07-15 20:16:05.036946] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:07.655 [2024-07-15 20:16:05.036966] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:07.655 request: 00:21:07.655 { 00:21:07.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.655 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.655 "psk": "/tmp/tmp.trlwBjPZ8X", 00:21:07.655 "method": "nvmf_subsystem_add_host", 00:21:07.655 "req_id": 1 00:21:07.655 } 00:21:07.655 Got JSON-RPC error response 00:21:07.655 response: 00:21:07.655 { 00:21:07.655 "code": -32603, 00:21:07.655 "message": "Internal error" 00:21:07.655 } 00:21:07.655 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:07.655 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.655 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1027421 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027421 ']' 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027421 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.656 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027421 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027421' 00:21:07.916 killing process with pid 1027421 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027421 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027421 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.trlwBjPZ8X 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:07.916 
20:16:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1027793 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1027793 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1027793 ']' 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.916 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.917 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.917 20:16:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.917 [2024-07-15 20:16:05.296140] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:07.917 [2024-07-15 20:16:05.296198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.917 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.177 [2024-07-15 20:16:05.378731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.177 [2024-07-15 20:16:05.433874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.177 [2024-07-15 20:16:05.433906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.177 [2024-07-15 20:16:05.433911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.177 [2024-07-15 20:16:05.433916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.177 [2024-07-15 20:16:05.433920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
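With the key back at mode 0600 and a fresh target started (-m 0x2, pid 1027793 above), setup_nvmf_tgt repeats the RPC sequence logged below: TCP transport, subsystem, TLS-capable listener, malloc namespace, and finally the host entry that carries the PSK. A condensed sketch of that flow, using the same subcommands and arguments visible in the trace (the rpc.py path is abbreviated here):

  rpc=./scripts/rpc.py
  key=/tmp/tmp.trlwBjPZ8X    # PSK interchange file, now readable only by the owner

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-capable ("TLS support is considered experimental")
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # With 0600 permissions this succeeds instead of the earlier -32603 "Internal error"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"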
00:21:08.177 [2024-07-15 20:16:05.433935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.trlwBjPZ8X 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.trlwBjPZ8X 00:21:08.750 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.011 [2024-07-15 20:16:06.243986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.011 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:09.011 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:09.271 [2024-07-15 20:16:06.536696] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.272 [2024-07-15 20:16:06.536842] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.272 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:09.272 malloc0 00:21:09.272 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:09.531 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trlwBjPZ8X 00:21:09.791 [2024-07-15 20:16:06.967485] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1028162 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1028162 /var/tmp/bdevperf.sock 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1028162 ']' 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.791 20:16:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.791 [2024-07-15 20:16:07.011693] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:09.791 [2024-07-15 20:16:07.011745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028162 ] 00:21:09.791 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.791 [2024-07-15 20:16:07.062762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.791 [2024-07-15 20:16:07.116022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.791 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.791 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.791 20:16:07 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trlwBjPZ8X 00:21:10.052 [2024-07-15 20:16:07.335317] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.052 [2024-07-15 20:16:07.335385] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:10.052 TLSTESTn1 00:21:10.052 20:16:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:10.313 20:16:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:10.313 "subsystems": [ 00:21:10.313 { 00:21:10.313 "subsystem": "keyring", 00:21:10.313 "config": [] 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "subsystem": "iobuf", 00:21:10.313 "config": [ 00:21:10.313 { 00:21:10.313 "method": "iobuf_set_options", 00:21:10.313 "params": { 00:21:10.313 "small_pool_count": 8192, 00:21:10.313 "large_pool_count": 1024, 00:21:10.313 "small_bufsize": 8192, 00:21:10.313 "large_bufsize": 135168 00:21:10.313 } 00:21:10.313 } 00:21:10.313 ] 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "subsystem": "sock", 00:21:10.313 "config": [ 00:21:10.313 { 00:21:10.313 "method": "sock_set_default_impl", 00:21:10.313 "params": { 00:21:10.313 "impl_name": "posix" 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "method": "sock_impl_set_options", 00:21:10.313 "params": { 00:21:10.313 "impl_name": "ssl", 00:21:10.313 "recv_buf_size": 4096, 00:21:10.313 "send_buf_size": 4096, 00:21:10.313 "enable_recv_pipe": true, 00:21:10.313 "enable_quickack": false, 00:21:10.313 "enable_placement_id": 0, 00:21:10.313 "enable_zerocopy_send_server": true, 00:21:10.313 "enable_zerocopy_send_client": false, 00:21:10.313 "zerocopy_threshold": 0, 00:21:10.313 "tls_version": 0, 00:21:10.313 "enable_ktls": false 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "method": "sock_impl_set_options", 00:21:10.313 "params": { 00:21:10.313 "impl_name": "posix", 00:21:10.313 "recv_buf_size": 2097152, 00:21:10.313 
"send_buf_size": 2097152, 00:21:10.313 "enable_recv_pipe": true, 00:21:10.313 "enable_quickack": false, 00:21:10.313 "enable_placement_id": 0, 00:21:10.313 "enable_zerocopy_send_server": true, 00:21:10.313 "enable_zerocopy_send_client": false, 00:21:10.313 "zerocopy_threshold": 0, 00:21:10.313 "tls_version": 0, 00:21:10.313 "enable_ktls": false 00:21:10.313 } 00:21:10.313 } 00:21:10.313 ] 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "subsystem": "vmd", 00:21:10.313 "config": [] 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "subsystem": "accel", 00:21:10.313 "config": [ 00:21:10.313 { 00:21:10.313 "method": "accel_set_options", 00:21:10.313 "params": { 00:21:10.313 "small_cache_size": 128, 00:21:10.313 "large_cache_size": 16, 00:21:10.313 "task_count": 2048, 00:21:10.313 "sequence_count": 2048, 00:21:10.313 "buf_count": 2048 00:21:10.313 } 00:21:10.313 } 00:21:10.313 ] 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "subsystem": "bdev", 00:21:10.313 "config": [ 00:21:10.313 { 00:21:10.313 "method": "bdev_set_options", 00:21:10.313 "params": { 00:21:10.313 "bdev_io_pool_size": 65535, 00:21:10.313 "bdev_io_cache_size": 256, 00:21:10.313 "bdev_auto_examine": true, 00:21:10.313 "iobuf_small_cache_size": 128, 00:21:10.313 "iobuf_large_cache_size": 16 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "method": "bdev_raid_set_options", 00:21:10.313 "params": { 00:21:10.313 "process_window_size_kb": 1024 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "method": "bdev_iscsi_set_options", 00:21:10.313 "params": { 00:21:10.313 "timeout_sec": 30 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "method": "bdev_nvme_set_options", 00:21:10.313 "params": { 00:21:10.313 "action_on_timeout": "none", 00:21:10.313 "timeout_us": 0, 00:21:10.313 "timeout_admin_us": 0, 00:21:10.313 "keep_alive_timeout_ms": 10000, 00:21:10.313 "arbitration_burst": 0, 00:21:10.313 "low_priority_weight": 0, 00:21:10.313 "medium_priority_weight": 0, 00:21:10.313 "high_priority_weight": 0, 00:21:10.313 "nvme_adminq_poll_period_us": 10000, 00:21:10.313 "nvme_ioq_poll_period_us": 0, 00:21:10.313 "io_queue_requests": 0, 00:21:10.313 "delay_cmd_submit": true, 00:21:10.313 "transport_retry_count": 4, 00:21:10.313 "bdev_retry_count": 3, 00:21:10.313 "transport_ack_timeout": 0, 00:21:10.313 "ctrlr_loss_timeout_sec": 0, 00:21:10.313 "reconnect_delay_sec": 0, 00:21:10.313 "fast_io_fail_timeout_sec": 0, 00:21:10.313 "disable_auto_failback": false, 00:21:10.313 "generate_uuids": false, 00:21:10.313 "transport_tos": 0, 00:21:10.313 "nvme_error_stat": false, 00:21:10.313 "rdma_srq_size": 0, 00:21:10.313 "io_path_stat": false, 00:21:10.313 "allow_accel_sequence": false, 00:21:10.313 "rdma_max_cq_size": 0, 00:21:10.313 "rdma_cm_event_timeout_ms": 0, 00:21:10.313 "dhchap_digests": [ 00:21:10.313 "sha256", 00:21:10.313 "sha384", 00:21:10.313 "sha512" 00:21:10.313 ], 00:21:10.313 "dhchap_dhgroups": [ 00:21:10.313 "null", 00:21:10.313 "ffdhe2048", 00:21:10.313 "ffdhe3072", 00:21:10.313 "ffdhe4096", 00:21:10.313 "ffdhe6144", 00:21:10.313 "ffdhe8192" 00:21:10.313 ] 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.313 "method": "bdev_nvme_set_hotplug", 00:21:10.313 "params": { 00:21:10.313 "period_us": 100000, 00:21:10.313 "enable": false 00:21:10.313 } 00:21:10.313 }, 00:21:10.313 { 00:21:10.314 "method": "bdev_malloc_create", 00:21:10.314 "params": { 00:21:10.314 "name": "malloc0", 00:21:10.314 "num_blocks": 8192, 00:21:10.314 "block_size": 4096, 00:21:10.314 "physical_block_size": 4096, 00:21:10.314 "uuid": 
"b9a0f0e0-ec0f-4ced-aefa-32294c34363e", 00:21:10.314 "optimal_io_boundary": 0 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "bdev_wait_for_examine" 00:21:10.314 } 00:21:10.314 ] 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "subsystem": "nbd", 00:21:10.314 "config": [] 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "subsystem": "scheduler", 00:21:10.314 "config": [ 00:21:10.314 { 00:21:10.314 "method": "framework_set_scheduler", 00:21:10.314 "params": { 00:21:10.314 "name": "static" 00:21:10.314 } 00:21:10.314 } 00:21:10.314 ] 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "subsystem": "nvmf", 00:21:10.314 "config": [ 00:21:10.314 { 00:21:10.314 "method": "nvmf_set_config", 00:21:10.314 "params": { 00:21:10.314 "discovery_filter": "match_any", 00:21:10.314 "admin_cmd_passthru": { 00:21:10.314 "identify_ctrlr": false 00:21:10.314 } 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_set_max_subsystems", 00:21:10.314 "params": { 00:21:10.314 "max_subsystems": 1024 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_set_crdt", 00:21:10.314 "params": { 00:21:10.314 "crdt1": 0, 00:21:10.314 "crdt2": 0, 00:21:10.314 "crdt3": 0 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_create_transport", 00:21:10.314 "params": { 00:21:10.314 "trtype": "TCP", 00:21:10.314 "max_queue_depth": 128, 00:21:10.314 "max_io_qpairs_per_ctrlr": 127, 00:21:10.314 "in_capsule_data_size": 4096, 00:21:10.314 "max_io_size": 131072, 00:21:10.314 "io_unit_size": 131072, 00:21:10.314 "max_aq_depth": 128, 00:21:10.314 "num_shared_buffers": 511, 00:21:10.314 "buf_cache_size": 4294967295, 00:21:10.314 "dif_insert_or_strip": false, 00:21:10.314 "zcopy": false, 00:21:10.314 "c2h_success": false, 00:21:10.314 "sock_priority": 0, 00:21:10.314 "abort_timeout_sec": 1, 00:21:10.314 "ack_timeout": 0, 00:21:10.314 "data_wr_pool_size": 0 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_create_subsystem", 00:21:10.314 "params": { 00:21:10.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.314 "allow_any_host": false, 00:21:10.314 "serial_number": "SPDK00000000000001", 00:21:10.314 "model_number": "SPDK bdev Controller", 00:21:10.314 "max_namespaces": 10, 00:21:10.314 "min_cntlid": 1, 00:21:10.314 "max_cntlid": 65519, 00:21:10.314 "ana_reporting": false 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_subsystem_add_host", 00:21:10.314 "params": { 00:21:10.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.314 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.314 "psk": "/tmp/tmp.trlwBjPZ8X" 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_subsystem_add_ns", 00:21:10.314 "params": { 00:21:10.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.314 "namespace": { 00:21:10.314 "nsid": 1, 00:21:10.314 "bdev_name": "malloc0", 00:21:10.314 "nguid": "B9A0F0E0EC0F4CEDAEFA32294C34363E", 00:21:10.314 "uuid": "b9a0f0e0-ec0f-4ced-aefa-32294c34363e", 00:21:10.314 "no_auto_visible": false 00:21:10.314 } 00:21:10.314 } 00:21:10.314 }, 00:21:10.314 { 00:21:10.314 "method": "nvmf_subsystem_add_listener", 00:21:10.314 "params": { 00:21:10.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.314 "listen_address": { 00:21:10.314 "trtype": "TCP", 00:21:10.314 "adrfam": "IPv4", 00:21:10.314 "traddr": "10.0.0.2", 00:21:10.314 "trsvcid": "4420" 00:21:10.314 }, 00:21:10.314 "secure_channel": true 00:21:10.314 } 00:21:10.314 } 00:21:10.314 ] 00:21:10.314 } 00:21:10.314 ] 00:21:10.314 }' 00:21:10.314 20:16:07 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:10.575 20:16:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:10.575 "subsystems": [ 00:21:10.575 { 00:21:10.575 "subsystem": "keyring", 00:21:10.575 "config": [] 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "subsystem": "iobuf", 00:21:10.575 "config": [ 00:21:10.575 { 00:21:10.575 "method": "iobuf_set_options", 00:21:10.575 "params": { 00:21:10.575 "small_pool_count": 8192, 00:21:10.575 "large_pool_count": 1024, 00:21:10.575 "small_bufsize": 8192, 00:21:10.575 "large_bufsize": 135168 00:21:10.575 } 00:21:10.575 } 00:21:10.575 ] 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "subsystem": "sock", 00:21:10.575 "config": [ 00:21:10.575 { 00:21:10.575 "method": "sock_set_default_impl", 00:21:10.575 "params": { 00:21:10.575 "impl_name": "posix" 00:21:10.575 } 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "method": "sock_impl_set_options", 00:21:10.575 "params": { 00:21:10.575 "impl_name": "ssl", 00:21:10.575 "recv_buf_size": 4096, 00:21:10.575 "send_buf_size": 4096, 00:21:10.575 "enable_recv_pipe": true, 00:21:10.575 "enable_quickack": false, 00:21:10.575 "enable_placement_id": 0, 00:21:10.575 "enable_zerocopy_send_server": true, 00:21:10.575 "enable_zerocopy_send_client": false, 00:21:10.575 "zerocopy_threshold": 0, 00:21:10.575 "tls_version": 0, 00:21:10.575 "enable_ktls": false 00:21:10.575 } 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "method": "sock_impl_set_options", 00:21:10.575 "params": { 00:21:10.575 "impl_name": "posix", 00:21:10.575 "recv_buf_size": 2097152, 00:21:10.575 "send_buf_size": 2097152, 00:21:10.575 "enable_recv_pipe": true, 00:21:10.575 "enable_quickack": false, 00:21:10.575 "enable_placement_id": 0, 00:21:10.575 "enable_zerocopy_send_server": true, 00:21:10.575 "enable_zerocopy_send_client": false, 00:21:10.575 "zerocopy_threshold": 0, 00:21:10.575 "tls_version": 0, 00:21:10.575 "enable_ktls": false 00:21:10.575 } 00:21:10.575 } 00:21:10.575 ] 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "subsystem": "vmd", 00:21:10.575 "config": [] 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "subsystem": "accel", 00:21:10.575 "config": [ 00:21:10.575 { 00:21:10.575 "method": "accel_set_options", 00:21:10.575 "params": { 00:21:10.575 "small_cache_size": 128, 00:21:10.575 "large_cache_size": 16, 00:21:10.575 "task_count": 2048, 00:21:10.575 "sequence_count": 2048, 00:21:10.575 "buf_count": 2048 00:21:10.575 } 00:21:10.575 } 00:21:10.575 ] 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "subsystem": "bdev", 00:21:10.575 "config": [ 00:21:10.575 { 00:21:10.575 "method": "bdev_set_options", 00:21:10.575 "params": { 00:21:10.575 "bdev_io_pool_size": 65535, 00:21:10.575 "bdev_io_cache_size": 256, 00:21:10.575 "bdev_auto_examine": true, 00:21:10.575 "iobuf_small_cache_size": 128, 00:21:10.575 "iobuf_large_cache_size": 16 00:21:10.575 } 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "method": "bdev_raid_set_options", 00:21:10.575 "params": { 00:21:10.575 "process_window_size_kb": 1024 00:21:10.575 } 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "method": "bdev_iscsi_set_options", 00:21:10.575 "params": { 00:21:10.575 "timeout_sec": 30 00:21:10.575 } 00:21:10.575 }, 00:21:10.575 { 00:21:10.575 "method": "bdev_nvme_set_options", 00:21:10.575 "params": { 00:21:10.575 "action_on_timeout": "none", 00:21:10.575 "timeout_us": 0, 00:21:10.575 "timeout_admin_us": 0, 00:21:10.575 "keep_alive_timeout_ms": 10000, 00:21:10.575 "arbitration_burst": 0, 
00:21:10.575 "low_priority_weight": 0, 00:21:10.575 "medium_priority_weight": 0, 00:21:10.575 "high_priority_weight": 0, 00:21:10.576 "nvme_adminq_poll_period_us": 10000, 00:21:10.576 "nvme_ioq_poll_period_us": 0, 00:21:10.576 "io_queue_requests": 512, 00:21:10.576 "delay_cmd_submit": true, 00:21:10.576 "transport_retry_count": 4, 00:21:10.576 "bdev_retry_count": 3, 00:21:10.576 "transport_ack_timeout": 0, 00:21:10.576 "ctrlr_loss_timeout_sec": 0, 00:21:10.576 "reconnect_delay_sec": 0, 00:21:10.576 "fast_io_fail_timeout_sec": 0, 00:21:10.576 "disable_auto_failback": false, 00:21:10.576 "generate_uuids": false, 00:21:10.576 "transport_tos": 0, 00:21:10.576 "nvme_error_stat": false, 00:21:10.576 "rdma_srq_size": 0, 00:21:10.576 "io_path_stat": false, 00:21:10.576 "allow_accel_sequence": false, 00:21:10.576 "rdma_max_cq_size": 0, 00:21:10.576 "rdma_cm_event_timeout_ms": 0, 00:21:10.576 "dhchap_digests": [ 00:21:10.576 "sha256", 00:21:10.576 "sha384", 00:21:10.576 "sha512" 00:21:10.576 ], 00:21:10.576 "dhchap_dhgroups": [ 00:21:10.576 "null", 00:21:10.576 "ffdhe2048", 00:21:10.576 "ffdhe3072", 00:21:10.576 "ffdhe4096", 00:21:10.576 "ffdhe6144", 00:21:10.576 "ffdhe8192" 00:21:10.576 ] 00:21:10.576 } 00:21:10.576 }, 00:21:10.576 { 00:21:10.576 "method": "bdev_nvme_attach_controller", 00:21:10.576 "params": { 00:21:10.576 "name": "TLSTEST", 00:21:10.576 "trtype": "TCP", 00:21:10.576 "adrfam": "IPv4", 00:21:10.576 "traddr": "10.0.0.2", 00:21:10.576 "trsvcid": "4420", 00:21:10.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.576 "prchk_reftag": false, 00:21:10.576 "prchk_guard": false, 00:21:10.576 "ctrlr_loss_timeout_sec": 0, 00:21:10.576 "reconnect_delay_sec": 0, 00:21:10.576 "fast_io_fail_timeout_sec": 0, 00:21:10.576 "psk": "/tmp/tmp.trlwBjPZ8X", 00:21:10.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.576 "hdgst": false, 00:21:10.576 "ddgst": false 00:21:10.576 } 00:21:10.576 }, 00:21:10.576 { 00:21:10.576 "method": "bdev_nvme_set_hotplug", 00:21:10.576 "params": { 00:21:10.576 "period_us": 100000, 00:21:10.576 "enable": false 00:21:10.576 } 00:21:10.576 }, 00:21:10.576 { 00:21:10.576 "method": "bdev_wait_for_examine" 00:21:10.576 } 00:21:10.576 ] 00:21:10.576 }, 00:21:10.576 { 00:21:10.576 "subsystem": "nbd", 00:21:10.576 "config": [] 00:21:10.576 } 00:21:10.576 ] 00:21:10.576 }' 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1028162 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1028162 ']' 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1028162 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1028162 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1028162' 00:21:10.576 killing process with pid 1028162 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1028162 00:21:10.576 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.576 00:21:10.576 Latency(us) 00:21:10.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:10.576 =================================================================================================================== 00:21:10.576 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:10.576 [2024-07-15 20:16:07.972658] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:10.576 20:16:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1028162 00:21:10.836 20:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1027793 00:21:10.836 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1027793 ']' 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1027793 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027793 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027793' 00:21:10.837 killing process with pid 1027793 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1027793 00:21:10.837 [2024-07-15 20:16:08.139656] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1027793 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.837 20:16:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:10.837 "subsystems": [ 00:21:10.837 { 00:21:10.837 "subsystem": "keyring", 00:21:10.837 "config": [] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "iobuf", 00:21:10.837 "config": [ 00:21:10.837 { 00:21:10.837 "method": "iobuf_set_options", 00:21:10.837 "params": { 00:21:10.837 "small_pool_count": 8192, 00:21:10.837 "large_pool_count": 1024, 00:21:10.837 "small_bufsize": 8192, 00:21:10.837 "large_bufsize": 135168 00:21:10.837 } 00:21:10.837 } 00:21:10.837 ] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "sock", 00:21:10.837 "config": [ 00:21:10.837 { 00:21:10.837 "method": "sock_set_default_impl", 00:21:10.837 "params": { 00:21:10.837 "impl_name": "posix" 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "sock_impl_set_options", 00:21:10.837 "params": { 00:21:10.837 "impl_name": "ssl", 00:21:10.837 "recv_buf_size": 4096, 00:21:10.837 "send_buf_size": 4096, 00:21:10.837 "enable_recv_pipe": true, 00:21:10.837 "enable_quickack": false, 00:21:10.837 "enable_placement_id": 0, 00:21:10.837 "enable_zerocopy_send_server": true, 00:21:10.837 "enable_zerocopy_send_client": false, 00:21:10.837 "zerocopy_threshold": 0, 00:21:10.837 "tls_version": 0, 00:21:10.837 "enable_ktls": false 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "sock_impl_set_options", 
00:21:10.837 "params": { 00:21:10.837 "impl_name": "posix", 00:21:10.837 "recv_buf_size": 2097152, 00:21:10.837 "send_buf_size": 2097152, 00:21:10.837 "enable_recv_pipe": true, 00:21:10.837 "enable_quickack": false, 00:21:10.837 "enable_placement_id": 0, 00:21:10.837 "enable_zerocopy_send_server": true, 00:21:10.837 "enable_zerocopy_send_client": false, 00:21:10.837 "zerocopy_threshold": 0, 00:21:10.837 "tls_version": 0, 00:21:10.837 "enable_ktls": false 00:21:10.837 } 00:21:10.837 } 00:21:10.837 ] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "vmd", 00:21:10.837 "config": [] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "accel", 00:21:10.837 "config": [ 00:21:10.837 { 00:21:10.837 "method": "accel_set_options", 00:21:10.837 "params": { 00:21:10.837 "small_cache_size": 128, 00:21:10.837 "large_cache_size": 16, 00:21:10.837 "task_count": 2048, 00:21:10.837 "sequence_count": 2048, 00:21:10.837 "buf_count": 2048 00:21:10.837 } 00:21:10.837 } 00:21:10.837 ] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "bdev", 00:21:10.837 "config": [ 00:21:10.837 { 00:21:10.837 "method": "bdev_set_options", 00:21:10.837 "params": { 00:21:10.837 "bdev_io_pool_size": 65535, 00:21:10.837 "bdev_io_cache_size": 256, 00:21:10.837 "bdev_auto_examine": true, 00:21:10.837 "iobuf_small_cache_size": 128, 00:21:10.837 "iobuf_large_cache_size": 16 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "bdev_raid_set_options", 00:21:10.837 "params": { 00:21:10.837 "process_window_size_kb": 1024 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "bdev_iscsi_set_options", 00:21:10.837 "params": { 00:21:10.837 "timeout_sec": 30 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "bdev_nvme_set_options", 00:21:10.837 "params": { 00:21:10.837 "action_on_timeout": "none", 00:21:10.837 "timeout_us": 0, 00:21:10.837 "timeout_admin_us": 0, 00:21:10.837 "keep_alive_timeout_ms": 10000, 00:21:10.837 "arbitration_burst": 0, 00:21:10.837 "low_priority_weight": 0, 00:21:10.837 "medium_priority_weight": 0, 00:21:10.837 "high_priority_weight": 0, 00:21:10.837 "nvme_adminq_poll_period_us": 10000, 00:21:10.837 "nvme_ioq_poll_period_us": 0, 00:21:10.837 "io_queue_requests": 0, 00:21:10.837 "delay_cmd_submit": true, 00:21:10.837 "transport_retry_count": 4, 00:21:10.837 "bdev_retry_count": 3, 00:21:10.837 "transport_ack_timeout": 0, 00:21:10.837 "ctrlr_loss_timeout_sec": 0, 00:21:10.837 "reconnect_delay_sec": 0, 00:21:10.837 "fast_io_fail_timeout_sec": 0, 00:21:10.837 "disable_auto_failback": false, 00:21:10.837 "generate_uuids": false, 00:21:10.837 "transport_tos": 0, 00:21:10.837 "nvme_error_stat": false, 00:21:10.837 "rdma_srq_size": 0, 00:21:10.837 "io_path_stat": false, 00:21:10.837 "allow_accel_sequence": false, 00:21:10.837 "rdma_max_cq_size": 0, 00:21:10.837 "rdma_cm_event_timeout_ms": 0, 00:21:10.837 "dhchap_digests": [ 00:21:10.837 "sha256", 00:21:10.837 "sha384", 00:21:10.837 "sha512" 00:21:10.837 ], 00:21:10.837 "dhchap_dhgroups": [ 00:21:10.837 "null", 00:21:10.837 "ffdhe2048", 00:21:10.837 "ffdhe3072", 00:21:10.837 "ffdhe4096", 00:21:10.837 "ffdhe6144", 00:21:10.837 "ffdhe8192" 00:21:10.837 ] 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "bdev_nvme_set_hotplug", 00:21:10.837 "params": { 00:21:10.837 "period_us": 100000, 00:21:10.837 "enable": false 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "bdev_malloc_create", 00:21:10.837 "params": { 00:21:10.837 "name": "malloc0", 00:21:10.837 "num_blocks": 8192, 
00:21:10.837 "block_size": 4096, 00:21:10.837 "physical_block_size": 4096, 00:21:10.837 "uuid": "b9a0f0e0-ec0f-4ced-aefa-32294c34363e", 00:21:10.837 "optimal_io_boundary": 0 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "bdev_wait_for_examine" 00:21:10.837 } 00:21:10.837 ] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "nbd", 00:21:10.837 "config": [] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "scheduler", 00:21:10.837 "config": [ 00:21:10.837 { 00:21:10.837 "method": "framework_set_scheduler", 00:21:10.837 "params": { 00:21:10.837 "name": "static" 00:21:10.837 } 00:21:10.837 } 00:21:10.837 ] 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "subsystem": "nvmf", 00:21:10.837 "config": [ 00:21:10.837 { 00:21:10.837 "method": "nvmf_set_config", 00:21:10.837 "params": { 00:21:10.837 "discovery_filter": "match_any", 00:21:10.837 "admin_cmd_passthru": { 00:21:10.837 "identify_ctrlr": false 00:21:10.837 } 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "nvmf_set_max_subsystems", 00:21:10.837 "params": { 00:21:10.837 "max_subsystems": 1024 00:21:10.837 } 00:21:10.837 }, 00:21:10.837 { 00:21:10.837 "method": "nvmf_set_crdt", 00:21:10.837 "params": { 00:21:10.837 "crdt1": 0, 00:21:10.837 "crdt2": 0, 00:21:10.838 "crdt3": 0 00:21:10.838 } 00:21:10.838 }, 00:21:10.838 { 00:21:10.838 "method": "nvmf_create_transport", 00:21:10.838 "params": { 00:21:10.838 "trtype": "TCP", 00:21:10.838 "max_queue_depth": 128, 00:21:10.838 "max_io_qpairs_per_ctrlr": 127, 00:21:10.838 "in_capsule_data_size": 4096, 00:21:10.838 "max_io_size": 131072, 00:21:10.838 "io_unit_size": 131072, 00:21:10.838 "max_aq_depth": 128, 00:21:10.838 "num_shared_buffers": 511, 00:21:10.838 "buf_cache_size": 4294967295, 00:21:10.838 "dif_insert_or_strip": false, 00:21:10.838 "zcopy": false, 00:21:10.838 "c2h_success": false, 00:21:10.838 "sock_priority": 0, 00:21:10.838 "abort_timeout_sec": 1, 00:21:10.838 "ack_timeout": 0, 00:21:10.838 "data_wr_pool_size": 0 00:21:10.838 } 00:21:10.838 }, 00:21:10.838 { 00:21:10.838 "method": "nvmf_create_subsystem", 00:21:10.838 "params": { 00:21:10.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.838 "allow_any_host": false, 00:21:10.838 "serial_number": "SPDK00000000000001", 00:21:10.838 "model_number": "SPDK bdev Controller", 00:21:10.838 "max_namespaces": 10, 00:21:10.838 "min_cntlid": 1, 00:21:10.838 "max_cntlid": 65519, 00:21:10.838 "ana_reporting": false 00:21:10.838 } 00:21:10.838 }, 00:21:10.838 { 00:21:10.838 "method": "nvmf_subsystem_add_host", 00:21:10.838 "params": { 00:21:10.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.838 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.838 "psk": "/tmp/tmp.trlwBjPZ8X" 00:21:10.838 } 00:21:10.838 }, 00:21:10.838 { 00:21:10.838 "method": "nvmf_subsystem_add_ns", 00:21:10.838 "params": { 00:21:10.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.838 "namespace": { 00:21:10.838 "nsid": 1, 00:21:10.838 "bdev_name": "malloc0", 00:21:10.838 "nguid": "B9A0F0E0EC0F4CEDAEFA32294C34363E", 00:21:10.838 "uuid": "b9a0f0e0-ec0f-4ced-aefa-32294c34363e", 00:21:10.838 "no_auto_visible": false 00:21:10.838 } 00:21:10.838 } 00:21:10.838 }, 00:21:10.838 { 00:21:10.838 "method": "nvmf_subsystem_add_listener", 00:21:10.838 "params": { 00:21:10.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.838 "listen_address": { 00:21:10.838 "trtype": "TCP", 00:21:10.838 "adrfam": "IPv4", 00:21:10.838 "traddr": "10.0.0.2", 00:21:10.838 "trsvcid": "4420" 00:21:10.838 }, 00:21:10.838 "secure_channel": true 00:21:10.838 } 
00:21:10.838 } 00:21:10.838 ] 00:21:10.838 } 00:21:10.838 ] 00:21:10.838 }' 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1028501 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1028501 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1028501 ']' 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.838 20:16:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.098 [2024-07-15 20:16:08.301421] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:11.099 [2024-07-15 20:16:08.301479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.099 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.099 [2024-07-15 20:16:08.380645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.099 [2024-07-15 20:16:08.432641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.099 [2024-07-15 20:16:08.432675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.099 [2024-07-15 20:16:08.432680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.099 [2024-07-15 20:16:08.432685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.099 [2024-07-15 20:16:08.432689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
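The tgtconf JSON captured with save_config is then fed straight back into a new target: tls.sh@203 starts nvmf_tgt with -c /dev/fd/62 and echoes the saved configuration into that descriptor, so the TLS subsystem, listener and PSK host entry are restored at boot instead of being re-created over RPC. A rough equivalent of that pattern using process substitution (same idea as the explicit fd 62 in the trace; the netns wrapper used by the script is omitted here):

  # Capture the running target's configuration over the RPC socket...
  tgtconf=$(./scripts/rpc.py save_config)

  # ...and start a fresh target that loads it as its JSON config at startup.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &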
00:21:11.099 [2024-07-15 20:16:08.432739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.359 [2024-07-15 20:16:08.615762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.359 [2024-07-15 20:16:08.631739] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.359 [2024-07-15 20:16:08.647797] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.359 [2024-07-15 20:16:08.659281] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.928 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1028702 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1028702 /var/tmp/bdevperf.sock 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1028702 ']' 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
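The initiator side mirrors this: bdevperf is launched with -c /dev/fd/63, which receives the bdevperfconf JSON saved earlier and already contains the bdev_nvme_attach_controller call with "psk": "/tmp/tmp.trlwBjPZ8X", so the TLS-backed TLSTESTn1 bdev exists as soon as the app is up; the Python helper then drives the verify workload over the RPC socket. A sketch of that sequence (bdevperfconf is the variable captured at tls.sh@197; waiting for the socket is handled by waitforlisten in the real script):

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &

  # once the RPC socket is up, run the configured workload for 20 seconds
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests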
00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.929 20:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:11.929 "subsystems": [ 00:21:11.929 { 00:21:11.929 "subsystem": "keyring", 00:21:11.929 "config": [] 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "subsystem": "iobuf", 00:21:11.929 "config": [ 00:21:11.929 { 00:21:11.929 "method": "iobuf_set_options", 00:21:11.929 "params": { 00:21:11.929 "small_pool_count": 8192, 00:21:11.929 "large_pool_count": 1024, 00:21:11.929 "small_bufsize": 8192, 00:21:11.929 "large_bufsize": 135168 00:21:11.929 } 00:21:11.929 } 00:21:11.929 ] 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "subsystem": "sock", 00:21:11.929 "config": [ 00:21:11.929 { 00:21:11.929 "method": "sock_set_default_impl", 00:21:11.929 "params": { 00:21:11.929 "impl_name": "posix" 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "sock_impl_set_options", 00:21:11.929 "params": { 00:21:11.929 "impl_name": "ssl", 00:21:11.929 "recv_buf_size": 4096, 00:21:11.929 "send_buf_size": 4096, 00:21:11.929 "enable_recv_pipe": true, 00:21:11.929 "enable_quickack": false, 00:21:11.929 "enable_placement_id": 0, 00:21:11.929 "enable_zerocopy_send_server": true, 00:21:11.929 "enable_zerocopy_send_client": false, 00:21:11.929 "zerocopy_threshold": 0, 00:21:11.929 "tls_version": 0, 00:21:11.929 "enable_ktls": false 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "sock_impl_set_options", 00:21:11.929 "params": { 00:21:11.929 "impl_name": "posix", 00:21:11.929 "recv_buf_size": 2097152, 00:21:11.929 "send_buf_size": 2097152, 00:21:11.929 "enable_recv_pipe": true, 00:21:11.929 "enable_quickack": false, 00:21:11.929 "enable_placement_id": 0, 00:21:11.929 "enable_zerocopy_send_server": true, 00:21:11.929 "enable_zerocopy_send_client": false, 00:21:11.929 "zerocopy_threshold": 0, 00:21:11.929 "tls_version": 0, 00:21:11.929 "enable_ktls": false 00:21:11.929 } 00:21:11.929 } 00:21:11.929 ] 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "subsystem": "vmd", 00:21:11.929 "config": [] 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "subsystem": "accel", 00:21:11.929 "config": [ 00:21:11.929 { 00:21:11.929 "method": "accel_set_options", 00:21:11.929 "params": { 00:21:11.929 "small_cache_size": 128, 00:21:11.929 "large_cache_size": 16, 00:21:11.929 "task_count": 2048, 00:21:11.929 "sequence_count": 2048, 00:21:11.929 "buf_count": 2048 00:21:11.929 } 00:21:11.929 } 00:21:11.929 ] 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "subsystem": "bdev", 00:21:11.929 "config": [ 00:21:11.929 { 00:21:11.929 "method": "bdev_set_options", 00:21:11.929 "params": { 00:21:11.929 "bdev_io_pool_size": 65535, 00:21:11.929 "bdev_io_cache_size": 256, 00:21:11.929 "bdev_auto_examine": true, 00:21:11.929 "iobuf_small_cache_size": 128, 00:21:11.929 "iobuf_large_cache_size": 16 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "bdev_raid_set_options", 00:21:11.929 "params": { 00:21:11.929 "process_window_size_kb": 1024 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "bdev_iscsi_set_options", 00:21:11.929 "params": { 00:21:11.929 "timeout_sec": 30 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": 
"bdev_nvme_set_options", 00:21:11.929 "params": { 00:21:11.929 "action_on_timeout": "none", 00:21:11.929 "timeout_us": 0, 00:21:11.929 "timeout_admin_us": 0, 00:21:11.929 "keep_alive_timeout_ms": 10000, 00:21:11.929 "arbitration_burst": 0, 00:21:11.929 "low_priority_weight": 0, 00:21:11.929 "medium_priority_weight": 0, 00:21:11.929 "high_priority_weight": 0, 00:21:11.929 "nvme_adminq_poll_period_us": 10000, 00:21:11.929 "nvme_ioq_poll_period_us": 0, 00:21:11.929 "io_queue_requests": 512, 00:21:11.929 "delay_cmd_submit": true, 00:21:11.929 "transport_retry_count": 4, 00:21:11.929 "bdev_retry_count": 3, 00:21:11.929 "transport_ack_timeout": 0, 00:21:11.929 "ctrlr_loss_timeout_sec": 0, 00:21:11.929 "reconnect_delay_sec": 0, 00:21:11.929 "fast_io_fail_timeout_sec": 0, 00:21:11.929 "disable_auto_failback": false, 00:21:11.929 "generate_uuids": false, 00:21:11.929 "transport_tos": 0, 00:21:11.929 "nvme_error_stat": false, 00:21:11.929 "rdma_srq_size": 0, 00:21:11.929 "io_path_stat": false, 00:21:11.929 "allow_accel_sequence": false, 00:21:11.929 "rdma_max_cq_size": 0, 00:21:11.929 "rdma_cm_event_timeout_ms": 0, 00:21:11.929 "dhchap_digests": [ 00:21:11.929 "sha256", 00:21:11.929 "sha384", 00:21:11.929 "sha512" 00:21:11.929 ], 00:21:11.929 "dhchap_dhgroups": [ 00:21:11.929 "null", 00:21:11.929 "ffdhe2048", 00:21:11.929 "ffdhe3072", 00:21:11.929 "ffdhe4096", 00:21:11.929 "ffdhe6144", 00:21:11.929 "ffdhe8192" 00:21:11.929 ] 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "bdev_nvme_attach_controller", 00:21:11.929 "params": { 00:21:11.929 "name": "TLSTEST", 00:21:11.929 "trtype": "TCP", 00:21:11.929 "adrfam": "IPv4", 00:21:11.929 "traddr": "10.0.0.2", 00:21:11.929 "trsvcid": "4420", 00:21:11.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.929 "prchk_reftag": false, 00:21:11.929 "prchk_guard": false, 00:21:11.929 "ctrlr_loss_timeout_sec": 0, 00:21:11.929 "reconnect_delay_sec": 0, 00:21:11.929 "fast_io_fail_timeout_sec": 0, 00:21:11.929 "psk": "/tmp/tmp.trlwBjPZ8X", 00:21:11.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.929 "hdgst": false, 00:21:11.929 "ddgst": false 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "bdev_nvme_set_hotplug", 00:21:11.929 "params": { 00:21:11.929 "period_us": 100000, 00:21:11.929 "enable": false 00:21:11.929 } 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "method": "bdev_wait_for_examine" 00:21:11.929 } 00:21:11.929 ] 00:21:11.929 }, 00:21:11.929 { 00:21:11.929 "subsystem": "nbd", 00:21:11.929 "config": [] 00:21:11.929 } 00:21:11.929 ] 00:21:11.929 }' 00:21:11.929 [2024-07-15 20:16:09.163650] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:21:11.929 [2024-07-15 20:16:09.163704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028702 ] 00:21:11.929 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.929 [2024-07-15 20:16:09.214315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.929 [2024-07-15 20:16:09.268498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.191 [2024-07-15 20:16:09.393075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.191 [2024-07-15 20:16:09.393145] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:12.762 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.762 20:16:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:12.762 20:16:09 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:12.762 Running I/O for 10 seconds... 00:21:22.759 00:21:22.759 Latency(us) 00:21:22.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.759 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:22.759 Verification LBA range: start 0x0 length 0x2000 00:21:22.759 TLSTESTn1 : 10.06 2564.55 10.02 0.00 0.00 49774.06 4778.67 108789.76 00:21:22.759 =================================================================================================================== 00:21:22.759 Total : 2564.55 10.02 0.00 0.00 49774.06 4778.67 108789.76 00:21:22.759 0 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1028702 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1028702 ']' 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1028702 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1028702 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1028702' 00:21:22.759 killing process with pid 1028702 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1028702 00:21:22.759 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.759 00:21:22.759 Latency(us) 00:21:22.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.759 =================================================================================================================== 00:21:22.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.759 [2024-07-15 20:16:20.180016] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:22.759 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1028702 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1028501 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1028501 ']' 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1028501 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1028501 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1028501' 00:21:23.019 killing process with pid 1028501 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1028501 00:21:23.019 [2024-07-15 20:16:20.350028] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.019 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1028501 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1030872 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1030872 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1030872 ']' 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.279 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.279 [2024-07-15 20:16:20.508770] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
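The nvmfappstart step at target/tls.sh@218 above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks until its RPC socket answers. A hedged sketch of that pattern; the polling loop is only an approximation of what waitforlisten in autotest_common.sh does, not a copy of it:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept commands
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done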
00:21:23.279 [2024-07-15 20:16:20.508825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.279 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.279 [2024-07-15 20:16:20.569947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.279 [2024-07-15 20:16:20.632639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.279 [2024-07-15 20:16:20.632676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.279 [2024-07-15 20:16:20.632684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.279 [2024-07-15 20:16:20.632690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.279 [2024-07-15 20:16:20.632696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.279 [2024-07-15 20:16:20.632720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.trlwBjPZ8X 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.trlwBjPZ8X 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.222 [2024-07-15 20:16:21.479632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.222 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.483 [2024-07-15 20:16:21.772355] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.483 [2024-07-15 20:16:21.772538] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.483 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.743 malloc0 00:21:24.743 20:16:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.743 20:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.trlwBjPZ8X 00:21:25.004 [2024-07-15 20:16:22.224327] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1031237 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1031237 /var/tmp/bdevperf.sock 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1031237 ']' 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.004 20:16:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.004 [2024-07-15 20:16:22.295943] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:25.004 [2024-07-15 20:16:22.296039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031237 ] 00:21:25.004 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.004 [2024-07-15 20:16:22.371679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.004 [2024-07-15 20:16:22.425285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.945 20:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.945 20:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.945 20:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.trlwBjPZ8X 00:21:25.945 20:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:25.945 [2024-07-15 20:16:23.327277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.206 nvme0n1 00:21:26.206 20:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:26.206 Running I/O for 1 seconds... 
00:21:27.148 00:21:27.148 Latency(us) 00:21:27.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.148 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:27.148 Verification LBA range: start 0x0 length 0x2000 00:21:27.148 nvme0n1 : 1.04 2431.41 9.50 0.00 0.00 51837.05 4724.05 115343.36 00:21:27.148 =================================================================================================================== 00:21:27.148 Total : 2431.41 9.50 0.00 0.00 51837.05 4724.05 115343.36 00:21:27.148 0 00:21:27.148 20:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1031237 00:21:27.148 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1031237 ']' 00:21:27.148 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1031237 00:21:27.148 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031237 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031237' 00:21:27.409 killing process with pid 1031237 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1031237 00:21:27.409 Received shutdown signal, test time was about 1.000000 seconds 00:21:27.409 00:21:27.409 Latency(us) 00:21:27.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.409 =================================================================================================================== 00:21:27.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1031237 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1030872 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1030872 ']' 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1030872 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030872 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030872' 00:21:27.409 killing process with pid 1030872 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1030872 00:21:27.409 [2024-07-15 20:16:24.799736] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:27.409 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1030872 00:21:27.667 20:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.668 
20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1031900 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1031900 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1031900 ']' 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.668 20:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.668 [2024-07-15 20:16:24.998075] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:27.668 [2024-07-15 20:16:24.998136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.668 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.668 [2024-07-15 20:16:25.062063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.927 [2024-07-15 20:16:25.126374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.927 [2024-07-15 20:16:25.126409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.927 [2024-07-15 20:16:25.126416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.927 [2024-07-15 20:16:25.126423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.927 [2024-07-15 20:16:25.126428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
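The app_setup_trace notices just above spell out how to inspect the 0xFFFF tracepoint mask that this nvmf_tgt instance (shm id 0) was started with. Restating those two hints as commands (the destination path in the copy is arbitrary):

  # snapshot the live tracepoints of the nvmf app with shm id 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved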
00:21:27.927 [2024-07-15 20:16:25.126445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.497 [2024-07-15 20:16:25.804603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.497 malloc0 00:21:28.497 [2024-07-15 20:16:25.831292] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.497 [2024-07-15 20:16:25.831472] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1031943 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1031943 /var/tmp/bdevperf.sock 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1031943 ']' 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.497 20:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.497 [2024-07-15 20:16:25.907792] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
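For reference, the PSK plumbing this test exercises appears twice in the trace: target-side at target/tls.sh@51-@58, initiator-side at @227-@228 and again at @257-@258 just below. Condensed into one hedged sketch, with the full scripts/rpc.py path shortened to rpc.py for readability:

  # target: TCP transport, subsystem, TLS-enabled listener (-k), namespace, PSK-bound host
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trlwBjPZ8X

  # initiator (bdevperf RPC socket): register the key file, then attach with --psk key0
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.trlwBjPZ8X
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1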
00:21:28.497 [2024-07-15 20:16:25.907837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031943 ] 00:21:28.759 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.759 [2024-07-15 20:16:25.981176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.759 [2024-07-15 20:16:26.034778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.329 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.329 20:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:29.329 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.trlwBjPZ8X 00:21:29.589 20:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:29.589 [2024-07-15 20:16:26.960576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.849 nvme0n1 00:21:29.849 20:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.849 Running I/O for 1 seconds... 00:21:31.257 00:21:31.257 Latency(us) 00:21:31.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.257 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:31.257 Verification LBA range: start 0x0 length 0x2000 00:21:31.257 nvme0n1 : 1.08 1824.70 7.13 0.00 0.00 67903.83 6144.00 126702.93 00:21:31.257 =================================================================================================================== 00:21:31.257 Total : 1824.70 7.13 0.00 0.00 67903.83 6144.00 126702.93 00:21:31.257 0 00:21:31.257 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:31.257 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.257 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.257 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.257 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:31.257 "subsystems": [ 00:21:31.257 { 00:21:31.257 "subsystem": "keyring", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "keyring_file_add_key", 00:21:31.257 "params": { 00:21:31.257 "name": "key0", 00:21:31.257 "path": "/tmp/tmp.trlwBjPZ8X" 00:21:31.257 } 00:21:31.257 } 00:21:31.257 ] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "iobuf", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "iobuf_set_options", 00:21:31.257 "params": { 00:21:31.257 "small_pool_count": 8192, 00:21:31.257 "large_pool_count": 1024, 00:21:31.257 "small_bufsize": 8192, 00:21:31.257 "large_bufsize": 135168 00:21:31.257 } 00:21:31.257 } 00:21:31.257 ] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "sock", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "sock_set_default_impl", 00:21:31.257 "params": { 00:21:31.257 "impl_name": "posix" 00:21:31.257 } 
00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "sock_impl_set_options", 00:21:31.257 "params": { 00:21:31.257 "impl_name": "ssl", 00:21:31.257 "recv_buf_size": 4096, 00:21:31.257 "send_buf_size": 4096, 00:21:31.257 "enable_recv_pipe": true, 00:21:31.257 "enable_quickack": false, 00:21:31.257 "enable_placement_id": 0, 00:21:31.257 "enable_zerocopy_send_server": true, 00:21:31.257 "enable_zerocopy_send_client": false, 00:21:31.257 "zerocopy_threshold": 0, 00:21:31.257 "tls_version": 0, 00:21:31.257 "enable_ktls": false 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "sock_impl_set_options", 00:21:31.257 "params": { 00:21:31.257 "impl_name": "posix", 00:21:31.257 "recv_buf_size": 2097152, 00:21:31.257 "send_buf_size": 2097152, 00:21:31.257 "enable_recv_pipe": true, 00:21:31.257 "enable_quickack": false, 00:21:31.257 "enable_placement_id": 0, 00:21:31.257 "enable_zerocopy_send_server": true, 00:21:31.257 "enable_zerocopy_send_client": false, 00:21:31.257 "zerocopy_threshold": 0, 00:21:31.257 "tls_version": 0, 00:21:31.257 "enable_ktls": false 00:21:31.257 } 00:21:31.257 } 00:21:31.257 ] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "vmd", 00:21:31.257 "config": [] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "accel", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "accel_set_options", 00:21:31.257 "params": { 00:21:31.257 "small_cache_size": 128, 00:21:31.257 "large_cache_size": 16, 00:21:31.257 "task_count": 2048, 00:21:31.257 "sequence_count": 2048, 00:21:31.257 "buf_count": 2048 00:21:31.257 } 00:21:31.257 } 00:21:31.257 ] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "bdev", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "bdev_set_options", 00:21:31.257 "params": { 00:21:31.257 "bdev_io_pool_size": 65535, 00:21:31.257 "bdev_io_cache_size": 256, 00:21:31.257 "bdev_auto_examine": true, 00:21:31.257 "iobuf_small_cache_size": 128, 00:21:31.257 "iobuf_large_cache_size": 16 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "bdev_raid_set_options", 00:21:31.257 "params": { 00:21:31.257 "process_window_size_kb": 1024 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "bdev_iscsi_set_options", 00:21:31.257 "params": { 00:21:31.257 "timeout_sec": 30 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "bdev_nvme_set_options", 00:21:31.257 "params": { 00:21:31.257 "action_on_timeout": "none", 00:21:31.257 "timeout_us": 0, 00:21:31.257 "timeout_admin_us": 0, 00:21:31.257 "keep_alive_timeout_ms": 10000, 00:21:31.257 "arbitration_burst": 0, 00:21:31.257 "low_priority_weight": 0, 00:21:31.257 "medium_priority_weight": 0, 00:21:31.257 "high_priority_weight": 0, 00:21:31.257 "nvme_adminq_poll_period_us": 10000, 00:21:31.257 "nvme_ioq_poll_period_us": 0, 00:21:31.257 "io_queue_requests": 0, 00:21:31.257 "delay_cmd_submit": true, 00:21:31.257 "transport_retry_count": 4, 00:21:31.257 "bdev_retry_count": 3, 00:21:31.257 "transport_ack_timeout": 0, 00:21:31.257 "ctrlr_loss_timeout_sec": 0, 00:21:31.257 "reconnect_delay_sec": 0, 00:21:31.257 "fast_io_fail_timeout_sec": 0, 00:21:31.257 "disable_auto_failback": false, 00:21:31.257 "generate_uuids": false, 00:21:31.257 "transport_tos": 0, 00:21:31.257 "nvme_error_stat": false, 00:21:31.257 "rdma_srq_size": 0, 00:21:31.257 "io_path_stat": false, 00:21:31.257 "allow_accel_sequence": false, 00:21:31.257 "rdma_max_cq_size": 0, 00:21:31.257 "rdma_cm_event_timeout_ms": 0, 00:21:31.257 "dhchap_digests": [ 00:21:31.257 "sha256", 
00:21:31.257 "sha384", 00:21:31.257 "sha512" 00:21:31.257 ], 00:21:31.257 "dhchap_dhgroups": [ 00:21:31.257 "null", 00:21:31.257 "ffdhe2048", 00:21:31.257 "ffdhe3072", 00:21:31.257 "ffdhe4096", 00:21:31.257 "ffdhe6144", 00:21:31.257 "ffdhe8192" 00:21:31.257 ] 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "bdev_nvme_set_hotplug", 00:21:31.257 "params": { 00:21:31.257 "period_us": 100000, 00:21:31.257 "enable": false 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "bdev_malloc_create", 00:21:31.257 "params": { 00:21:31.257 "name": "malloc0", 00:21:31.257 "num_blocks": 8192, 00:21:31.257 "block_size": 4096, 00:21:31.257 "physical_block_size": 4096, 00:21:31.257 "uuid": "94771f06-e489-4437-ad65-2a4334cef2a3", 00:21:31.257 "optimal_io_boundary": 0 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "bdev_wait_for_examine" 00:21:31.257 } 00:21:31.257 ] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "nbd", 00:21:31.257 "config": [] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "scheduler", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "framework_set_scheduler", 00:21:31.257 "params": { 00:21:31.257 "name": "static" 00:21:31.257 } 00:21:31.257 } 00:21:31.257 ] 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "subsystem": "nvmf", 00:21:31.257 "config": [ 00:21:31.257 { 00:21:31.257 "method": "nvmf_set_config", 00:21:31.257 "params": { 00:21:31.257 "discovery_filter": "match_any", 00:21:31.257 "admin_cmd_passthru": { 00:21:31.257 "identify_ctrlr": false 00:21:31.257 } 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "nvmf_set_max_subsystems", 00:21:31.257 "params": { 00:21:31.257 "max_subsystems": 1024 00:21:31.257 } 00:21:31.257 }, 00:21:31.257 { 00:21:31.257 "method": "nvmf_set_crdt", 00:21:31.258 "params": { 00:21:31.258 "crdt1": 0, 00:21:31.258 "crdt2": 0, 00:21:31.258 "crdt3": 0 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "nvmf_create_transport", 00:21:31.258 "params": { 00:21:31.258 "trtype": "TCP", 00:21:31.258 "max_queue_depth": 128, 00:21:31.258 "max_io_qpairs_per_ctrlr": 127, 00:21:31.258 "in_capsule_data_size": 4096, 00:21:31.258 "max_io_size": 131072, 00:21:31.258 "io_unit_size": 131072, 00:21:31.258 "max_aq_depth": 128, 00:21:31.258 "num_shared_buffers": 511, 00:21:31.258 "buf_cache_size": 4294967295, 00:21:31.258 "dif_insert_or_strip": false, 00:21:31.258 "zcopy": false, 00:21:31.258 "c2h_success": false, 00:21:31.258 "sock_priority": 0, 00:21:31.258 "abort_timeout_sec": 1, 00:21:31.258 "ack_timeout": 0, 00:21:31.258 "data_wr_pool_size": 0 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "nvmf_create_subsystem", 00:21:31.258 "params": { 00:21:31.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.258 "allow_any_host": false, 00:21:31.258 "serial_number": "00000000000000000000", 00:21:31.258 "model_number": "SPDK bdev Controller", 00:21:31.258 "max_namespaces": 32, 00:21:31.258 "min_cntlid": 1, 00:21:31.258 "max_cntlid": 65519, 00:21:31.258 "ana_reporting": false 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "nvmf_subsystem_add_host", 00:21:31.258 "params": { 00:21:31.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.258 "host": "nqn.2016-06.io.spdk:host1", 00:21:31.258 "psk": "key0" 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "nvmf_subsystem_add_ns", 00:21:31.258 "params": { 00:21:31.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.258 "namespace": { 00:21:31.258 "nsid": 1, 
00:21:31.258 "bdev_name": "malloc0", 00:21:31.258 "nguid": "94771F06E4894437AD652A4334CEF2A3", 00:21:31.258 "uuid": "94771f06-e489-4437-ad65-2a4334cef2a3", 00:21:31.258 "no_auto_visible": false 00:21:31.258 } 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "nvmf_subsystem_add_listener", 00:21:31.258 "params": { 00:21:31.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.258 "listen_address": { 00:21:31.258 "trtype": "TCP", 00:21:31.258 "adrfam": "IPv4", 00:21:31.258 "traddr": "10.0.0.2", 00:21:31.258 "trsvcid": "4420" 00:21:31.258 }, 00:21:31.258 "secure_channel": false, 00:21:31.258 "sock_impl": "ssl" 00:21:31.258 } 00:21:31.258 } 00:21:31.258 ] 00:21:31.258 } 00:21:31.258 ] 00:21:31.258 }' 00:21:31.258 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:31.258 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:31.258 "subsystems": [ 00:21:31.258 { 00:21:31.258 "subsystem": "keyring", 00:21:31.258 "config": [ 00:21:31.258 { 00:21:31.258 "method": "keyring_file_add_key", 00:21:31.258 "params": { 00:21:31.258 "name": "key0", 00:21:31.258 "path": "/tmp/tmp.trlwBjPZ8X" 00:21:31.258 } 00:21:31.258 } 00:21:31.258 ] 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "subsystem": "iobuf", 00:21:31.258 "config": [ 00:21:31.258 { 00:21:31.258 "method": "iobuf_set_options", 00:21:31.258 "params": { 00:21:31.258 "small_pool_count": 8192, 00:21:31.258 "large_pool_count": 1024, 00:21:31.258 "small_bufsize": 8192, 00:21:31.258 "large_bufsize": 135168 00:21:31.258 } 00:21:31.258 } 00:21:31.258 ] 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "subsystem": "sock", 00:21:31.258 "config": [ 00:21:31.258 { 00:21:31.258 "method": "sock_set_default_impl", 00:21:31.258 "params": { 00:21:31.258 "impl_name": "posix" 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "sock_impl_set_options", 00:21:31.258 "params": { 00:21:31.258 "impl_name": "ssl", 00:21:31.258 "recv_buf_size": 4096, 00:21:31.258 "send_buf_size": 4096, 00:21:31.258 "enable_recv_pipe": true, 00:21:31.258 "enable_quickack": false, 00:21:31.258 "enable_placement_id": 0, 00:21:31.258 "enable_zerocopy_send_server": true, 00:21:31.258 "enable_zerocopy_send_client": false, 00:21:31.258 "zerocopy_threshold": 0, 00:21:31.258 "tls_version": 0, 00:21:31.258 "enable_ktls": false 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "sock_impl_set_options", 00:21:31.258 "params": { 00:21:31.258 "impl_name": "posix", 00:21:31.258 "recv_buf_size": 2097152, 00:21:31.258 "send_buf_size": 2097152, 00:21:31.258 "enable_recv_pipe": true, 00:21:31.258 "enable_quickack": false, 00:21:31.258 "enable_placement_id": 0, 00:21:31.258 "enable_zerocopy_send_server": true, 00:21:31.258 "enable_zerocopy_send_client": false, 00:21:31.258 "zerocopy_threshold": 0, 00:21:31.258 "tls_version": 0, 00:21:31.258 "enable_ktls": false 00:21:31.258 } 00:21:31.258 } 00:21:31.258 ] 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "subsystem": "vmd", 00:21:31.258 "config": [] 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "subsystem": "accel", 00:21:31.258 "config": [ 00:21:31.258 { 00:21:31.258 "method": "accel_set_options", 00:21:31.258 "params": { 00:21:31.258 "small_cache_size": 128, 00:21:31.258 "large_cache_size": 16, 00:21:31.258 "task_count": 2048, 00:21:31.258 "sequence_count": 2048, 00:21:31.258 "buf_count": 2048 00:21:31.258 } 00:21:31.258 } 00:21:31.258 ] 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "subsystem": "bdev", 
00:21:31.258 "config": [ 00:21:31.258 { 00:21:31.258 "method": "bdev_set_options", 00:21:31.258 "params": { 00:21:31.258 "bdev_io_pool_size": 65535, 00:21:31.258 "bdev_io_cache_size": 256, 00:21:31.258 "bdev_auto_examine": true, 00:21:31.258 "iobuf_small_cache_size": 128, 00:21:31.258 "iobuf_large_cache_size": 16 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "bdev_raid_set_options", 00:21:31.258 "params": { 00:21:31.258 "process_window_size_kb": 1024 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "bdev_iscsi_set_options", 00:21:31.258 "params": { 00:21:31.258 "timeout_sec": 30 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "bdev_nvme_set_options", 00:21:31.258 "params": { 00:21:31.258 "action_on_timeout": "none", 00:21:31.258 "timeout_us": 0, 00:21:31.258 "timeout_admin_us": 0, 00:21:31.258 "keep_alive_timeout_ms": 10000, 00:21:31.258 "arbitration_burst": 0, 00:21:31.258 "low_priority_weight": 0, 00:21:31.258 "medium_priority_weight": 0, 00:21:31.258 "high_priority_weight": 0, 00:21:31.258 "nvme_adminq_poll_period_us": 10000, 00:21:31.258 "nvme_ioq_poll_period_us": 0, 00:21:31.258 "io_queue_requests": 512, 00:21:31.258 "delay_cmd_submit": true, 00:21:31.258 "transport_retry_count": 4, 00:21:31.258 "bdev_retry_count": 3, 00:21:31.258 "transport_ack_timeout": 0, 00:21:31.258 "ctrlr_loss_timeout_sec": 0, 00:21:31.258 "reconnect_delay_sec": 0, 00:21:31.258 "fast_io_fail_timeout_sec": 0, 00:21:31.258 "disable_auto_failback": false, 00:21:31.258 "generate_uuids": false, 00:21:31.258 "transport_tos": 0, 00:21:31.258 "nvme_error_stat": false, 00:21:31.258 "rdma_srq_size": 0, 00:21:31.258 "io_path_stat": false, 00:21:31.258 "allow_accel_sequence": false, 00:21:31.258 "rdma_max_cq_size": 0, 00:21:31.258 "rdma_cm_event_timeout_ms": 0, 00:21:31.258 "dhchap_digests": [ 00:21:31.258 "sha256", 00:21:31.258 "sha384", 00:21:31.258 "sha512" 00:21:31.258 ], 00:21:31.258 "dhchap_dhgroups": [ 00:21:31.258 "null", 00:21:31.258 "ffdhe2048", 00:21:31.258 "ffdhe3072", 00:21:31.258 "ffdhe4096", 00:21:31.258 "ffdhe6144", 00:21:31.258 "ffdhe8192" 00:21:31.258 ] 00:21:31.258 } 00:21:31.258 }, 00:21:31.258 { 00:21:31.258 "method": "bdev_nvme_attach_controller", 00:21:31.258 "params": { 00:21:31.258 "name": "nvme0", 00:21:31.258 "trtype": "TCP", 00:21:31.258 "adrfam": "IPv4", 00:21:31.258 "traddr": "10.0.0.2", 00:21:31.258 "trsvcid": "4420", 00:21:31.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.258 "prchk_reftag": false, 00:21:31.258 "prchk_guard": false, 00:21:31.258 "ctrlr_loss_timeout_sec": 0, 00:21:31.258 "reconnect_delay_sec": 0, 00:21:31.258 "fast_io_fail_timeout_sec": 0, 00:21:31.258 "psk": "key0", 00:21:31.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.259 "hdgst": false, 00:21:31.259 "ddgst": false 00:21:31.259 } 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "method": "bdev_nvme_set_hotplug", 00:21:31.259 "params": { 00:21:31.259 "period_us": 100000, 00:21:31.259 "enable": false 00:21:31.259 } 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "method": "bdev_enable_histogram", 00:21:31.259 "params": { 00:21:31.259 "name": "nvme0n1", 00:21:31.259 "enable": true 00:21:31.259 } 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "method": "bdev_wait_for_examine" 00:21:31.259 } 00:21:31.259 ] 00:21:31.259 }, 00:21:31.259 { 00:21:31.259 "subsystem": "nbd", 00:21:31.259 "config": [] 00:21:31.259 } 00:21:31.259 ] 00:21:31.259 }' 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1031943 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 1031943 ']' 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1031943 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031943 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031943' 00:21:31.259 killing process with pid 1031943 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1031943 00:21:31.259 Received shutdown signal, test time was about 1.000000 seconds 00:21:31.259 00:21:31.259 Latency(us) 00:21:31.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.259 =================================================================================================================== 00:21:31.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.259 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1031943 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1031900 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1031900 ']' 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1031900 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031900 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031900' 00:21:31.518 killing process with pid 1031900 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1031900 00:21:31.518 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1031900 00:21:31.778 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:31.778 20:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.778 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.778 20:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:31.778 "subsystems": [ 00:21:31.778 { 00:21:31.778 "subsystem": "keyring", 00:21:31.778 "config": [ 00:21:31.778 { 00:21:31.778 "method": "keyring_file_add_key", 00:21:31.778 "params": { 00:21:31.778 "name": "key0", 00:21:31.778 "path": "/tmp/tmp.trlwBjPZ8X" 00:21:31.778 } 00:21:31.778 } 00:21:31.778 ] 00:21:31.778 }, 00:21:31.778 { 00:21:31.778 "subsystem": "iobuf", 00:21:31.778 "config": [ 00:21:31.778 { 00:21:31.778 "method": "iobuf_set_options", 00:21:31.778 "params": { 00:21:31.778 "small_pool_count": 8192, 00:21:31.778 "large_pool_count": 1024, 00:21:31.778 "small_bufsize": 8192, 00:21:31.778 "large_bufsize": 135168 00:21:31.779 } 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 }, 
00:21:31.779 { 00:21:31.779 "subsystem": "sock", 00:21:31.779 "config": [ 00:21:31.779 { 00:21:31.779 "method": "sock_set_default_impl", 00:21:31.779 "params": { 00:21:31.779 "impl_name": "posix" 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "sock_impl_set_options", 00:21:31.779 "params": { 00:21:31.779 "impl_name": "ssl", 00:21:31.779 "recv_buf_size": 4096, 00:21:31.779 "send_buf_size": 4096, 00:21:31.779 "enable_recv_pipe": true, 00:21:31.779 "enable_quickack": false, 00:21:31.779 "enable_placement_id": 0, 00:21:31.779 "enable_zerocopy_send_server": true, 00:21:31.779 "enable_zerocopy_send_client": false, 00:21:31.779 "zerocopy_threshold": 0, 00:21:31.779 "tls_version": 0, 00:21:31.779 "enable_ktls": false 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "sock_impl_set_options", 00:21:31.779 "params": { 00:21:31.779 "impl_name": "posix", 00:21:31.779 "recv_buf_size": 2097152, 00:21:31.779 "send_buf_size": 2097152, 00:21:31.779 "enable_recv_pipe": true, 00:21:31.779 "enable_quickack": false, 00:21:31.779 "enable_placement_id": 0, 00:21:31.779 "enable_zerocopy_send_server": true, 00:21:31.779 "enable_zerocopy_send_client": false, 00:21:31.779 "zerocopy_threshold": 0, 00:21:31.779 "tls_version": 0, 00:21:31.779 "enable_ktls": false 00:21:31.779 } 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "subsystem": "vmd", 00:21:31.779 "config": [] 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "subsystem": "accel", 00:21:31.779 "config": [ 00:21:31.779 { 00:21:31.779 "method": "accel_set_options", 00:21:31.779 "params": { 00:21:31.779 "small_cache_size": 128, 00:21:31.779 "large_cache_size": 16, 00:21:31.779 "task_count": 2048, 00:21:31.779 "sequence_count": 2048, 00:21:31.779 "buf_count": 2048 00:21:31.779 } 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "subsystem": "bdev", 00:21:31.779 "config": [ 00:21:31.779 { 00:21:31.779 "method": "bdev_set_options", 00:21:31.779 "params": { 00:21:31.779 "bdev_io_pool_size": 65535, 00:21:31.779 "bdev_io_cache_size": 256, 00:21:31.779 "bdev_auto_examine": true, 00:21:31.779 "iobuf_small_cache_size": 128, 00:21:31.779 "iobuf_large_cache_size": 16 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "bdev_raid_set_options", 00:21:31.779 "params": { 00:21:31.779 "process_window_size_kb": 1024 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "bdev_iscsi_set_options", 00:21:31.779 "params": { 00:21:31.779 "timeout_sec": 30 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "bdev_nvme_set_options", 00:21:31.779 "params": { 00:21:31.779 "action_on_timeout": "none", 00:21:31.779 "timeout_us": 0, 00:21:31.779 "timeout_admin_us": 0, 00:21:31.779 "keep_alive_timeout_ms": 10000, 00:21:31.779 "arbitration_burst": 0, 00:21:31.779 "low_priority_weight": 0, 00:21:31.779 "medium_priority_weight": 0, 00:21:31.779 "high_priority_weight": 0, 00:21:31.779 "nvme_adminq_poll_period_us": 10000, 00:21:31.779 "nvme_ioq_poll_period_us": 0, 00:21:31.779 "io_queue_requests": 0, 00:21:31.779 "delay_cmd_submit": true, 00:21:31.779 "transport_retry_count": 4, 00:21:31.779 "bdev_retry_count": 3, 00:21:31.779 "transport_ack_timeout": 0, 00:21:31.779 "ctrlr_loss_timeout_sec": 0, 00:21:31.779 "reconnect_delay_sec": 0, 00:21:31.779 "fast_io_fail_timeout_sec": 0, 00:21:31.779 "disable_auto_failback": false, 00:21:31.779 "generate_uuids": false, 00:21:31.779 "transport_tos": 0, 00:21:31.779 "nvme_error_stat": false, 00:21:31.779 "rdma_srq_size": 0, 
00:21:31.779 "io_path_stat": false, 00:21:31.779 "allow_accel_sequence": false, 00:21:31.779 "rdma_max_cq_size": 0, 00:21:31.779 "rdma_cm_event_timeout_ms": 0, 00:21:31.779 "dhchap_digests": [ 00:21:31.779 "sha256", 00:21:31.779 "sha384", 00:21:31.779 "sha512" 00:21:31.779 ], 00:21:31.779 "dhchap_dhgroups": [ 00:21:31.779 "null", 00:21:31.779 "ffdhe2048", 00:21:31.779 "ffdhe3072", 00:21:31.779 "ffdhe4096", 00:21:31.779 "ffdhe6144", 00:21:31.779 "ffdhe8192" 00:21:31.779 ] 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "bdev_nvme_set_hotplug", 00:21:31.779 "params": { 00:21:31.779 "period_us": 100000, 00:21:31.779 "enable": false 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "bdev_malloc_create", 00:21:31.779 "params": { 00:21:31.779 "name": "malloc0", 00:21:31.779 "num_blocks": 8192, 00:21:31.779 "block_size": 4096, 00:21:31.779 "physical_block_size": 4096, 00:21:31.779 "uuid": "94771f06-e489-4437-ad65-2a4334cef2a3", 00:21:31.779 "optimal_io_boundary": 0 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "bdev_wait_for_examine" 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "subsystem": "nbd", 00:21:31.779 "config": [] 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "subsystem": "scheduler", 00:21:31.779 "config": [ 00:21:31.779 { 00:21:31.779 "method": "framework_set_scheduler", 00:21:31.779 "params": { 00:21:31.779 "name": "static" 00:21:31.779 } 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "subsystem": "nvmf", 00:21:31.779 "config": [ 00:21:31.779 { 00:21:31.779 "method": "nvmf_set_config", 00:21:31.779 "params": { 00:21:31.779 "discovery_filter": "match_any", 00:21:31.779 "admin_cmd_passthru": { 00:21:31.779 "identify_ctrlr": false 00:21:31.779 } 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_set_max_subsystems", 00:21:31.779 "params": { 00:21:31.779 "max_subsystems": 1024 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_set_crdt", 00:21:31.779 "params": { 00:21:31.779 "crdt1": 0, 00:21:31.779 "crdt2": 0, 00:21:31.779 "crdt3": 0 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_create_transport", 00:21:31.779 "params": { 00:21:31.779 "trtype": "TCP", 00:21:31.779 "max_queue_depth": 128, 00:21:31.779 "max_io_qpairs_per_ctrlr": 127, 00:21:31.779 "in_capsule_data_size": 4096, 00:21:31.779 "max_io_size": 131072, 00:21:31.779 "io_unit_size": 131072, 00:21:31.779 "max_aq_depth": 128, 00:21:31.779 "num_shared_buffers": 511, 00:21:31.779 "buf_cache_size": 4294967295, 00:21:31.779 "dif_insert_or_strip": false, 00:21:31.779 "zcopy": false, 00:21:31.779 "c2h_success": false, 00:21:31.779 "sock_priority": 0, 00:21:31.779 "abort_timeout_sec": 1, 00:21:31.779 "ack_timeout": 0, 00:21:31.779 "data_wr_pool_size": 0 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_create_subsystem", 00:21:31.779 "params": { 00:21:31.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.779 "allow_any_host": false, 00:21:31.779 "serial_number": "00000000000000000000", 00:21:31.779 "model_number": "SPDK bdev Controller", 00:21:31.779 "max_namespaces": 32, 00:21:31.779 "min_cntlid": 1, 00:21:31.779 "max_cntlid": 65519, 00:21:31.779 "ana_reporting": false 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_subsystem_add_host", 00:21:31.779 "params": { 00:21:31.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.779 "host": "nqn.2016-06.io.spdk:host1", 00:21:31.779 "psk": "key0" 00:21:31.779 } 
00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_subsystem_add_ns", 00:21:31.779 "params": { 00:21:31.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.779 "namespace": { 00:21:31.779 "nsid": 1, 00:21:31.779 "bdev_name": "malloc0", 00:21:31.779 "nguid": "94771F06E4894437AD652A4334CEF2A3", 00:21:31.779 "uuid": "94771f06-e489-4437-ad65-2a4334cef2a3", 00:21:31.779 "no_auto_visible": false 00:21:31.779 } 00:21:31.779 } 00:21:31.779 }, 00:21:31.779 { 00:21:31.779 "method": "nvmf_subsystem_add_listener", 00:21:31.779 "params": { 00:21:31.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.779 "listen_address": { 00:21:31.779 "trtype": "TCP", 00:21:31.779 "adrfam": "IPv4", 00:21:31.779 "traddr": "10.0.0.2", 00:21:31.779 "trsvcid": "4420" 00:21:31.779 }, 00:21:31.779 "secure_channel": false, 00:21:31.779 "sock_impl": "ssl" 00:21:31.779 } 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 } 00:21:31.779 ] 00:21:31.779 }' 00:21:31.779 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.779 20:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1032631 00:21:31.779 20:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1032631 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1032631 ']' 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.780 20:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.780 [2024-07-15 20:16:29.023395] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:31.780 [2024-07-15 20:16:29.023451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.780 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.780 [2024-07-15 20:16:29.087419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.780 [2024-07-15 20:16:29.151833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.780 [2024-07-15 20:16:29.151868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.780 [2024-07-15 20:16:29.151875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.780 [2024-07-15 20:16:29.151882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.780 [2024-07-15 20:16:29.151887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
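Steps @265 and @271 above form a save/replay round trip: the running target's configuration is captured with save_config and then fed verbatim into a fresh nvmf_tgt through -c /dev/fd/62. A minimal sketch of the same round trip using an ordinary file instead of the /dev/fd process substitution (the file name is illustrative):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # capture the live target configuration over its RPC socket
  "$SPDK/scripts/rpc.py" save_config > /tmp/tgt_config.json
  # later: start a new target preconfigured from that JSON
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c /tmp/tgt_config.json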
00:21:31.780 [2024-07-15 20:16:29.151937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.039 [2024-07-15 20:16:29.348995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.039 [2024-07-15 20:16:29.381009] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.039 [2024-07-15 20:16:29.394415] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1032792 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1032792 /var/tmp/bdevperf.sock 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1032792 ']' 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
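Because every bdevperf instance in this trace is started with -z, it sits idle after loading its config and is driven explicitly over its RPC socket, as at target/tls.sh@211, @232 and @262 earlier. The driver call, shown here with the optional -t timeout used at @211:

  # run the configured verify workload; -t bounds how long to wait for completion
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests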
00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.608 20:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:32.608 "subsystems": [ 00:21:32.608 { 00:21:32.608 "subsystem": "keyring", 00:21:32.608 "config": [ 00:21:32.608 { 00:21:32.608 "method": "keyring_file_add_key", 00:21:32.608 "params": { 00:21:32.608 "name": "key0", 00:21:32.608 "path": "/tmp/tmp.trlwBjPZ8X" 00:21:32.608 } 00:21:32.608 } 00:21:32.608 ] 00:21:32.608 }, 00:21:32.608 { 00:21:32.608 "subsystem": "iobuf", 00:21:32.608 "config": [ 00:21:32.608 { 00:21:32.608 "method": "iobuf_set_options", 00:21:32.608 "params": { 00:21:32.608 "small_pool_count": 8192, 00:21:32.608 "large_pool_count": 1024, 00:21:32.608 "small_bufsize": 8192, 00:21:32.608 "large_bufsize": 135168 00:21:32.608 } 00:21:32.608 } 00:21:32.608 ] 00:21:32.608 }, 00:21:32.608 { 00:21:32.608 "subsystem": "sock", 00:21:32.608 "config": [ 00:21:32.608 { 00:21:32.608 "method": "sock_set_default_impl", 00:21:32.608 "params": { 00:21:32.608 "impl_name": "posix" 00:21:32.608 } 00:21:32.608 }, 00:21:32.608 { 00:21:32.608 "method": "sock_impl_set_options", 00:21:32.608 "params": { 00:21:32.608 "impl_name": "ssl", 00:21:32.608 "recv_buf_size": 4096, 00:21:32.608 "send_buf_size": 4096, 00:21:32.608 "enable_recv_pipe": true, 00:21:32.608 "enable_quickack": false, 00:21:32.608 "enable_placement_id": 0, 00:21:32.608 "enable_zerocopy_send_server": true, 00:21:32.608 "enable_zerocopy_send_client": false, 00:21:32.608 "zerocopy_threshold": 0, 00:21:32.608 "tls_version": 0, 00:21:32.608 "enable_ktls": false 00:21:32.608 } 00:21:32.608 }, 00:21:32.608 { 00:21:32.608 "method": "sock_impl_set_options", 00:21:32.608 "params": { 00:21:32.608 "impl_name": "posix", 00:21:32.608 "recv_buf_size": 2097152, 00:21:32.608 "send_buf_size": 2097152, 00:21:32.608 "enable_recv_pipe": true, 00:21:32.608 "enable_quickack": false, 00:21:32.608 "enable_placement_id": 0, 00:21:32.608 "enable_zerocopy_send_server": true, 00:21:32.608 "enable_zerocopy_send_client": false, 00:21:32.608 "zerocopy_threshold": 0, 00:21:32.608 "tls_version": 0, 00:21:32.608 "enable_ktls": false 00:21:32.608 } 00:21:32.608 } 00:21:32.608 ] 00:21:32.608 }, 00:21:32.608 { 00:21:32.608 "subsystem": "vmd", 00:21:32.608 "config": [] 00:21:32.608 }, 00:21:32.608 { 00:21:32.608 "subsystem": "accel", 00:21:32.608 "config": [ 00:21:32.608 { 00:21:32.608 "method": "accel_set_options", 00:21:32.608 "params": { 00:21:32.608 "small_cache_size": 128, 00:21:32.608 "large_cache_size": 16, 00:21:32.608 "task_count": 2048, 00:21:32.608 "sequence_count": 2048, 00:21:32.608 "buf_count": 2048 00:21:32.608 } 00:21:32.609 } 00:21:32.609 ] 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "subsystem": "bdev", 00:21:32.609 "config": [ 00:21:32.609 { 00:21:32.609 "method": "bdev_set_options", 00:21:32.609 "params": { 00:21:32.609 "bdev_io_pool_size": 65535, 00:21:32.609 "bdev_io_cache_size": 256, 00:21:32.609 "bdev_auto_examine": true, 00:21:32.609 "iobuf_small_cache_size": 128, 00:21:32.609 "iobuf_large_cache_size": 16 00:21:32.609 } 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_raid_set_options", 00:21:32.609 "params": { 00:21:32.609 "process_window_size_kb": 1024 00:21:32.609 } 
00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_iscsi_set_options", 00:21:32.609 "params": { 00:21:32.609 "timeout_sec": 30 00:21:32.609 } 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_nvme_set_options", 00:21:32.609 "params": { 00:21:32.609 "action_on_timeout": "none", 00:21:32.609 "timeout_us": 0, 00:21:32.609 "timeout_admin_us": 0, 00:21:32.609 "keep_alive_timeout_ms": 10000, 00:21:32.609 "arbitration_burst": 0, 00:21:32.609 "low_priority_weight": 0, 00:21:32.609 "medium_priority_weight": 0, 00:21:32.609 "high_priority_weight": 0, 00:21:32.609 "nvme_adminq_poll_period_us": 10000, 00:21:32.609 "nvme_ioq_poll_period_us": 0, 00:21:32.609 "io_queue_requests": 512, 00:21:32.609 "delay_cmd_submit": true, 00:21:32.609 "transport_retry_count": 4, 00:21:32.609 "bdev_retry_count": 3, 00:21:32.609 "transport_ack_timeout": 0, 00:21:32.609 "ctrlr_loss_timeout_sec": 0, 00:21:32.609 "reconnect_delay_sec": 0, 00:21:32.609 "fast_io_fail_timeout_sec": 0, 00:21:32.609 "disable_auto_failback": false, 00:21:32.609 "generate_uuids": false, 00:21:32.609 "transport_tos": 0, 00:21:32.609 "nvme_error_stat": false, 00:21:32.609 "rdma_srq_size": 0, 00:21:32.609 "io_path_stat": false, 00:21:32.609 "allow_accel_sequence": false, 00:21:32.609 "rdma_max_cq_size": 0, 00:21:32.609 "rdma_cm_event_timeout_ms": 0, 00:21:32.609 "dhchap_digests": [ 00:21:32.609 "sha256", 00:21:32.609 "sha384", 00:21:32.609 "sha512" 00:21:32.609 ], 00:21:32.609 "dhchap_dhgroups": [ 00:21:32.609 "null", 00:21:32.609 "ffdhe2048", 00:21:32.609 "ffdhe3072", 00:21:32.609 "ffdhe4096", 00:21:32.609 "ffdhe6144", 00:21:32.609 "ffdhe8192" 00:21:32.609 ] 00:21:32.609 } 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_nvme_attach_controller", 00:21:32.609 "params": { 00:21:32.609 "name": "nvme0", 00:21:32.609 "trtype": "TCP", 00:21:32.609 "adrfam": "IPv4", 00:21:32.609 "traddr": "10.0.0.2", 00:21:32.609 "trsvcid": "4420", 00:21:32.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.609 "prchk_reftag": false, 00:21:32.609 "prchk_guard": false, 00:21:32.609 "ctrlr_loss_timeout_sec": 0, 00:21:32.609 "reconnect_delay_sec": 0, 00:21:32.609 "fast_io_fail_timeout_sec": 0, 00:21:32.609 "psk": "key0", 00:21:32.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.609 "hdgst": false, 00:21:32.609 "ddgst": false 00:21:32.609 } 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_nvme_set_hotplug", 00:21:32.609 "params": { 00:21:32.609 "period_us": 100000, 00:21:32.609 "enable": false 00:21:32.609 } 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_enable_histogram", 00:21:32.609 "params": { 00:21:32.609 "name": "nvme0n1", 00:21:32.609 "enable": true 00:21:32.609 } 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "method": "bdev_wait_for_examine" 00:21:32.609 } 00:21:32.609 ] 00:21:32.609 }, 00:21:32.609 { 00:21:32.609 "subsystem": "nbd", 00:21:32.609 "config": [] 00:21:32.609 } 00:21:32.609 ] 00:21:32.609 }' 00:21:32.609 [2024-07-15 20:16:29.866414] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:21:32.609 [2024-07-15 20:16:29.866467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032792 ] 00:21:32.609 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.609 [2024-07-15 20:16:29.942298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.609 [2024-07-15 20:16:29.995951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.870 [2024-07-15 20:16:30.130762] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.459 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.459 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:33.459 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:33.459 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:33.459 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.459 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.720 Running I/O for 1 seconds... 00:21:34.659 00:21:34.659 Latency(us) 00:21:34.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.659 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:34.659 Verification LBA range: start 0x0 length 0x2000 00:21:34.659 nvme0n1 : 1.04 2204.96 8.61 0.00 0.00 57189.38 4751.36 108352.85 00:21:34.659 =================================================================================================================== 00:21:34.659 Total : 2204.96 8.61 0.00 0.00 57189.38 4751.36 108352.85 00:21:34.659 0 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:34.659 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:34.659 nvmf_trace.0 00:21:34.659 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:34.659 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1032792 00:21:34.659 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1032792 ']' 00:21:34.659 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1032792 00:21:34.659 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032792 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032792' 00:21:34.660 killing process with pid 1032792 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1032792 00:21:34.660 Received shutdown signal, test time was about 1.000000 seconds 00:21:34.660 00:21:34.660 Latency(us) 00:21:34.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.660 =================================================================================================================== 00:21:34.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.660 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1032792 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.920 rmmod nvme_tcp 00:21:34.920 rmmod nvme_fabrics 00:21:34.920 rmmod nvme_keyring 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1032631 ']' 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1032631 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1032631 ']' 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1032631 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032631 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032631' 00:21:34.920 killing process with pid 1032631 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1032631 00:21:34.920 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1032631 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.181 20:16:32 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.181 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.092 20:16:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.092 20:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.AilF3z35eE /tmp/tmp.FTw5Yymwu5 /tmp/tmp.trlwBjPZ8X 00:21:37.092 00:21:37.092 real 1m23.058s 00:21:37.092 user 2m5.214s 00:21:37.092 sys 0m28.971s 00:21:37.092 20:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.092 20:16:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.092 ************************************ 00:21:37.092 END TEST nvmf_tls 00:21:37.092 ************************************ 00:21:37.353 20:16:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:37.353 20:16:34 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:37.353 20:16:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:37.353 20:16:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.353 20:16:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.353 ************************************ 00:21:37.353 START TEST nvmf_fips 00:21:37.354 ************************************ 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:37.354 * Looking for test storage... 
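Just before the FIPS suite begins, the nvmf_tls run above tears its environment down: both SPDK processes are killed by PID, the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the namespace-side interface is flushed, and the three per-run PSK files under /tmp are removed. Condensed into plain commands; the PIDs and key paths are specific to this run, and the namespace deletion is an assumption about what _remove_spdk_ns does, since that helper runs with tracing disabled:

# sketch: per-suite teardown mirroring cleanup/nvmftestfini in the trace above
kill "$bdevperf_pid" "$nvmfpid"                  # 1032792 and 1032631 in this run
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
ip netns delete cvl_0_0_ns_spdk 2>/dev/null      # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1
rm -f /tmp/tmp.AilF3z35eE /tmp/tmp.FTw5Yymwu5 /tmp/tmp.trlwBjPZ8X   # per-run PSK files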
00:21:37.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.354 20:16:34 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:37.354 20:16:34 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:37.616 Error setting digest 00:21:37.616 00E20578DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:37.616 00E20578DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.616 20:16:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.762 
20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.762 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.762 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.762 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.762 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.762 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.763 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.763 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.763 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.763 20:16:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:21:45.763 00:21:45.763 --- 10.0.0.2 ping statistics --- 00:21:45.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.763 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:21:45.763 00:21:45.763 --- 10.0.0.1 ping statistics --- 00:21:45.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.763 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1037434 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1037434 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1037434 ']' 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.763 [2024-07-15 20:16:42.194513] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:45.763 [2024-07-15 20:16:42.194581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.763 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.763 [2024-07-15 20:16:42.281972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.763 [2024-07-15 20:16:42.374525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.763 [2024-07-15 20:16:42.374583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:45.763 [2024-07-15 20:16:42.374592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.763 [2024-07-15 20:16:42.374599] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.763 [2024-07-15 20:16:42.374605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.763 [2024-07-15 20:16:42.374631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.763 20:16:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:45.763 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.763 [2024-07-15 20:16:43.168333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.763 [2024-07-15 20:16:43.184320] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.763 [2024-07-15 20:16:43.184586] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.025 [2024-07-15 20:16:43.214487] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:46.025 malloc0 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1037706 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1037706 /var/tmp/bdevperf.sock 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1037706 ']' 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.025 20:16:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.025 [2024-07-15 20:16:43.318000] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:21:46.025 [2024-07-15 20:16:43.318071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037706 ] 00:21:46.025 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.025 [2024-07-15 20:16:43.372838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.025 [2024-07-15 20:16:43.437224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.967 20:16:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.967 20:16:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:46.967 20:16:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:46.967 [2024-07-15 20:16:44.212826] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.967 [2024-07-15 20:16:44.212888] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:46.967 TLSTESTn1 00:21:46.967 20:16:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.967 Running I/O for 10 seconds... 
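The attach just traced is the initiator half of the TLS setup for this suite: key.txt carries the NVMeTLSkey-1:01 PSK interchange string with 0600 permissions, bdevperf is pointed at the target with --psk, and perform_tests then runs the 10-second verify job whose results follow. Reduced to the commands involved, with paths shortened relative to the spdk workspace root:

# sketch: TLS PSK file plus initiator-side attach, as driven above
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > test/nvmf/fips/key.txt
chmod 0600 test/nvmf/fips/key.txt
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests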
00:21:59.198 00:21:59.198 Latency(us) 00:21:59.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.198 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:59.198 Verification LBA range: start 0x0 length 0x2000 00:21:59.198 TLSTESTn1 : 10.07 2511.84 9.81 0.00 0.00 50783.48 6034.77 100488.53 00:21:59.198 =================================================================================================================== 00:21:59.198 Total : 2511.84 9.81 0.00 0.00 50783.48 6034.77 100488.53 00:21:59.198 0 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:59.198 nvmf_trace.0 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1037706 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1037706 ']' 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1037706 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1037706 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1037706' 00:21:59.198 killing process with pid 1037706 00:21:59.198 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1037706 00:21:59.198 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.198 00:21:59.199 Latency(us) 00:21:59.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.199 =================================================================================================================== 00:21:59.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.199 [2024-07-15 20:16:54.652670] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1037706 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.199 rmmod nvme_tcp 00:21:59.199 rmmod nvme_fabrics 00:21:59.199 rmmod nvme_keyring 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1037434 ']' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1037434 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1037434 ']' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1037434 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1037434 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1037434' 00:21:59.199 killing process with pid 1037434 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1037434 00:21:59.199 [2024-07-15 20:16:54.881582] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1037434 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.199 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.770 20:16:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:59.770 20:16:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:59.770 00:21:59.770 real 0m22.484s 00:21:59.770 user 0m22.930s 00:21:59.770 sys 0m10.233s 00:21:59.770 20:16:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.770 20:16:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:59.770 ************************************ 00:21:59.770 END TEST nvmf_fips 
00:21:59.770 ************************************ 00:21:59.770 20:16:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.770 20:16:57 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:59.770 20:16:57 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:59.770 20:16:57 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:59.770 20:16:57 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:59.770 20:16:57 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.770 20:16:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.950 20:17:03 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.950 20:17:03 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.950 20:17:03 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.950 20:17:03 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:07.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:07.951 20:17:03 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:07.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:07.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:07.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:07.951 20:17:03 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:07.951 20:17:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:07.951 20:17:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
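The gather_supported_nvmf_pci_devs pass above resolves each supported PCI function to its kernel net interface by globbing sysfs. A minimal standalone sketch of the same lookup, using the two E810 functions reported on this node (adjust the PCI addresses on other hosts):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      # empty output means no driver is bound yet (e.g. ice not loaded)
      ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null
  done

On this run the loop prints cvl_0_0 and cvl_0_1, the names that feed TCP_INTERFACE_LIST.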
00:22:07.951 20:17:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.951 ************************************ 00:22:07.951 START TEST nvmf_perf_adq 00:22:07.951 ************************************ 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:07.951 * Looking for test storage... 00:22:07.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.951 20:17:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:14.536 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:14.536 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:14.536 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:14.536 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:14.537 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:14.537 20:17:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:15.479 20:17:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:17.393 20:17:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:22.683 20:17:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.683 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.684 20:17:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.684 20:17:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:22.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:22:22.684 00:22:22.684 --- 10.0.0.2 ping statistics --- 00:22:22.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.684 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:22:22.684 00:22:22.684 --- 10.0.0.1 ping statistics --- 00:22:22.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.684 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.684 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1049586 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1049586 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1049586 ']' 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.945 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.945 [2024-07-15 20:17:20.172517] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
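The nvmf_tcp_init entries above carve out the test topology before the target starts: one E810 port is moved into a private network namespace, both sides get addresses on the 10.0.0.0/24 test subnet, and NVMe/TCP traffic on port 4420 is allowed through. A condensed sketch of those steps, assuming the same cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace used in this run:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                              # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                        # root namespace -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator side

The two pings mirror the sub-millisecond round trips recorded above and confirm the link before nvmf_tgt is launched inside the namespace.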
00:22:22.945 [2024-07-15 20:17:20.172591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.945 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.945 [2024-07-15 20:17:20.245817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.945 [2024-07-15 20:17:20.322821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.945 [2024-07-15 20:17:20.322862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.945 [2024-07-15 20:17:20.322870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.945 [2024-07-15 20:17:20.322877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.945 [2024-07-15 20:17:20.322882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.945 [2024-07-15 20:17:20.323029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.945 [2024-07-15 20:17:20.323146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.945 [2024-07-15 20:17:20.323282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.945 [2024-07-15 20:17:20.323284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:23.887 20:17:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 [2024-07-15 20:17:21.138186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 Malloc1 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.887 [2024-07-15 20:17:21.197589] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1049863 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:23.887 20:17:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:23.887 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:25.808 
"tick_rate": 2400000000, 00:22:25.808 "poll_groups": [ 00:22:25.808 { 00:22:25.808 "name": "nvmf_tgt_poll_group_000", 00:22:25.808 "admin_qpairs": 1, 00:22:25.808 "io_qpairs": 1, 00:22:25.808 "current_admin_qpairs": 1, 00:22:25.808 "current_io_qpairs": 1, 00:22:25.808 "pending_bdev_io": 0, 00:22:25.808 "completed_nvme_io": 19755, 00:22:25.808 "transports": [ 00:22:25.808 { 00:22:25.808 "trtype": "TCP" 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "name": "nvmf_tgt_poll_group_001", 00:22:25.808 "admin_qpairs": 0, 00:22:25.808 "io_qpairs": 1, 00:22:25.808 "current_admin_qpairs": 0, 00:22:25.808 "current_io_qpairs": 1, 00:22:25.808 "pending_bdev_io": 0, 00:22:25.808 "completed_nvme_io": 28890, 00:22:25.808 "transports": [ 00:22:25.808 { 00:22:25.808 "trtype": "TCP" 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "name": "nvmf_tgt_poll_group_002", 00:22:25.808 "admin_qpairs": 0, 00:22:25.808 "io_qpairs": 1, 00:22:25.808 "current_admin_qpairs": 0, 00:22:25.808 "current_io_qpairs": 1, 00:22:25.808 "pending_bdev_io": 0, 00:22:25.808 "completed_nvme_io": 20594, 00:22:25.808 "transports": [ 00:22:25.808 { 00:22:25.808 "trtype": "TCP" 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "name": "nvmf_tgt_poll_group_003", 00:22:25.808 "admin_qpairs": 0, 00:22:25.808 "io_qpairs": 1, 00:22:25.808 "current_admin_qpairs": 0, 00:22:25.808 "current_io_qpairs": 1, 00:22:25.808 "pending_bdev_io": 0, 00:22:25.808 "completed_nvme_io": 20207, 00:22:25.808 "transports": [ 00:22:25.808 { 00:22:25.808 "trtype": "TCP" 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }' 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:25.808 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:26.070 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:26.070 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:26.070 20:17:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1049863 00:22:34.205 Initializing NVMe Controllers 00:22:34.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:34.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:34.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:34.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:34.205 Initialization complete. Launching workers. 
00:22:34.205 ======================================================== 00:22:34.205 Latency(us) 00:22:34.205 Device Information : IOPS MiB/s Average min max 00:22:34.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11280.00 44.06 5673.56 1592.55 9526.97 00:22:34.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15106.60 59.01 4236.22 1578.46 11267.21 00:22:34.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14272.70 55.75 4484.11 1433.52 11582.12 00:22:34.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13047.30 50.97 4905.47 1242.86 9449.04 00:22:34.205 ======================================================== 00:22:34.205 Total : 53706.59 209.79 4766.57 1242.86 11582.12 00:22:34.205 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.205 rmmod nvme_tcp 00:22:34.205 rmmod nvme_fabrics 00:22:34.205 rmmod nvme_keyring 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1049586 ']' 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1049586 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1049586 ']' 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1049586 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1049586 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1049586' 00:22:34.205 killing process with pid 1049586 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1049586 00:22:34.205 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1049586 00:22:34.466 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.466 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.466 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.466 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.466 20:17:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.467 20:17:31 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.467 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.467 20:17:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.378 20:17:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.378 20:17:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:36.378 20:17:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:38.289 20:17:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:40.273 20:17:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.557 20:17:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:45.557 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:45.557 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
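For reference, the first perf pass above (before the ice driver reload) built the target with a short RPC sequence and then reduced nvmf_get_stats with jq to confirm that each of the four poll groups picked up exactly one I/O qpair. A sketch of the same calls, assuming the scripts/rpc.py client from the SPDK tree and the default /var/tmp/spdk.sock RPC socket shown in this run:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # after spdk_nvme_perf connects: one line is printed per poll group with an active I/O qpair
  count=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
  [[ $count -eq 4 ]] || echo "ADQ placement check failed: $count of 4 poll groups busy"

The 4-of-4 result matches the count=4 check logged above.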
00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:45.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:45.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.557 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.558 
20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:22:45.558 00:22:45.558 --- 10.0.0.2 ping statistics --- 00:22:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.558 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:22:45.558 00:22:45.558 --- 10.0.0.1 ping statistics --- 00:22:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.558 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:45.558 net.core.busy_poll = 1 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:45.558 net.core.busy_read = 1 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1054409 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1054409 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1054409 ']' 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.558 20:17:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.558 [2024-07-15 20:17:42.915955] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:22:45.558 [2024-07-15 20:17:42.916011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.558 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.558 [2024-07-15 20:17:42.984557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.817 [2024-07-15 20:17:43.053101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.817 [2024-07-15 20:17:43.053144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.817 [2024-07-15 20:17:43.053152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.817 [2024-07-15 20:17:43.053158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.817 [2024-07-15 20:17:43.053163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
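For readability, the ADQ configuration traced by adq_configure_driver above amounts to the following sequence. This is a condensed sketch of the commands already visible in the log, not a separate recipe; the interface (cvl_0_0), namespace (cvl_0_0_ns_spdk) and listener address (10.0.0.2:4420) are the values of this particular run.

# enable hardware TC offload on the target-side port and turn off the
# driver's channel-pkt-inspect-optimize private flag
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# busy-poll sockets instead of sleeping on interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# two traffic classes: tc0 gets 2 queues starting at queue 0, tc1 gets 2 queues starting at queue 2
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# steer NVMe/TCP traffic for the listener into tc1, offloaded in hardware (skip_sw)
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# align XPS transmit queues with the matching receive queues
ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0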
00:22:45.817 [2024-07-15 20:17:43.053374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.817 [2024-07-15 20:17:43.053543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.817 [2024-07-15 20:17:43.053700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.817 [2024-07-15 20:17:43.053699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.389 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.649 [2024-07-15 20:17:43.871184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.649 Malloc1 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.649 20:17:43 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.649 [2024-07-15 20:17:43.930527] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1054761 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:46.649 20:17:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:46.649 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:48.561 "tick_rate": 2400000000, 00:22:48.561 "poll_groups": [ 00:22:48.561 { 00:22:48.561 "name": "nvmf_tgt_poll_group_000", 00:22:48.561 "admin_qpairs": 1, 00:22:48.561 "io_qpairs": 3, 00:22:48.561 "current_admin_qpairs": 1, 00:22:48.561 "current_io_qpairs": 3, 00:22:48.561 "pending_bdev_io": 0, 00:22:48.561 "completed_nvme_io": 28782, 00:22:48.561 "transports": [ 00:22:48.561 { 00:22:48.561 "trtype": "TCP" 00:22:48.561 } 00:22:48.561 ] 00:22:48.561 }, 00:22:48.561 { 00:22:48.561 "name": "nvmf_tgt_poll_group_001", 00:22:48.561 "admin_qpairs": 0, 00:22:48.561 "io_qpairs": 1, 00:22:48.561 "current_admin_qpairs": 0, 00:22:48.561 "current_io_qpairs": 1, 00:22:48.561 "pending_bdev_io": 0, 00:22:48.561 "completed_nvme_io": 40553, 00:22:48.561 "transports": [ 00:22:48.561 { 00:22:48.561 "trtype": "TCP" 00:22:48.561 } 00:22:48.561 ] 00:22:48.561 }, 00:22:48.561 { 00:22:48.561 "name": "nvmf_tgt_poll_group_002", 00:22:48.561 "admin_qpairs": 0, 00:22:48.561 "io_qpairs": 0, 00:22:48.561 "current_admin_qpairs": 0, 00:22:48.561 "current_io_qpairs": 0, 00:22:48.561 "pending_bdev_io": 0, 00:22:48.561 "completed_nvme_io": 0, 
00:22:48.561 "transports": [ 00:22:48.561 { 00:22:48.561 "trtype": "TCP" 00:22:48.561 } 00:22:48.561 ] 00:22:48.561 }, 00:22:48.561 { 00:22:48.561 "name": "nvmf_tgt_poll_group_003", 00:22:48.561 "admin_qpairs": 0, 00:22:48.561 "io_qpairs": 0, 00:22:48.561 "current_admin_qpairs": 0, 00:22:48.561 "current_io_qpairs": 0, 00:22:48.561 "pending_bdev_io": 0, 00:22:48.561 "completed_nvme_io": 0, 00:22:48.561 "transports": [ 00:22:48.561 { 00:22:48.561 "trtype": "TCP" 00:22:48.561 } 00:22:48.561 ] 00:22:48.561 } 00:22:48.561 ] 00:22:48.561 }' 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:48.561 20:17:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:48.822 20:17:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:48.822 20:17:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:48.822 20:17:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1054761 00:22:56.958 Initializing NVMe Controllers 00:22:56.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:56.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:56.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:56.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:56.958 Initialization complete. Launching workers. 00:22:56.958 ======================================================== 00:22:56.958 Latency(us) 00:22:56.958 Device Information : IOPS MiB/s Average min max 00:22:56.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5319.00 20.78 12033.29 1590.28 60762.50 00:22:56.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 20372.80 79.58 3141.08 987.85 45728.80 00:22:56.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6169.70 24.10 10406.18 1558.89 59888.68 00:22:56.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8739.20 34.14 7343.18 915.46 55822.52 00:22:56.958 ======================================================== 00:22:56.958 Total : 40600.69 158.60 6314.53 915.46 60762.50 00:22:56.958 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.958 rmmod nvme_tcp 00:22:56.958 rmmod nvme_fabrics 00:22:56.958 rmmod nvme_keyring 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1054409 ']' 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1054409 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1054409 ']' 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1054409 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054409 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054409' 00:22:56.958 killing process with pid 1054409 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1054409 00:22:56.958 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1054409 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.219 20:17:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.517 20:17:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.517 20:17:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:00.517 00:23:00.517 real 0m53.462s 00:23:00.517 user 2m49.654s 00:23:00.517 sys 0m10.795s 00:23:00.517 20:17:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.517 20:17:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.517 ************************************ 00:23:00.517 END TEST nvmf_perf_adq 00:23:00.517 ************************************ 00:23:00.517 20:17:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:00.517 20:17:57 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:00.517 20:17:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:00.517 20:17:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.517 20:17:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.517 ************************************ 00:23:00.517 START TEST nvmf_shutdown 00:23:00.517 ************************************ 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:00.517 * Looking for test storage... 
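The poll-group check that closes the ADQ run above (rpc_cmd nvmf_get_stats piped through jq and wc -l) is the actual pass/fail criterion: with the flower filter concentrating the NVMe/TCP connections, at least two of the four poll groups are expected to report current_io_qpairs == 0. A rough standalone equivalent is sketched below; it assumes the usual scripts/rpc.py client and the default /var/tmp/spdk.sock socket rather than the test's rpc_cmd wrapper.

# count poll groups that never picked up an I/O qpair
idle=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats \
        | jq '[.poll_groups[] | select(.current_io_qpairs == 0)] | length')

# ADQ steering is considered working if most groups stayed idle
if (( idle < 2 )); then
    echo "ADQ check failed: only $idle idle poll group(s)" >&2
    exit 1
fi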
00:23:00.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.517 20:17:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.518 ************************************ 00:23:00.518 START TEST nvmf_shutdown_tc1 00:23:00.518 ************************************ 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:00.518 20:17:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.518 20:17:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.659 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:08.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:08.660 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.660 20:18:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:08.660 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:08.660 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:23:08.660 00:23:08.660 --- 10.0.0.2 ping statistics --- 00:23:08.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.660 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:23:08.660 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:23:08.661 00:23:08.661 --- 10.0.0.1 ping statistics --- 00:23:08.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.661 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.661 20:18:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1061324 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1061324 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1061324 ']' 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.661 [2024-07-15 20:18:05.104753] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:23:08.661 [2024-07-15 20:18:05.104802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.661 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.661 [2024-07-15 20:18:05.187728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.661 [2024-07-15 20:18:05.263794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.661 [2024-07-15 20:18:05.263851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.661 [2024-07-15 20:18:05.263860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.661 [2024-07-15 20:18:05.263866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.661 [2024-07-15 20:18:05.263872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.661 [2024-07-15 20:18:05.264011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.661 [2024-07-15 20:18:05.264188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.661 [2024-07-15 20:18:05.264360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.661 [2024-07-15 20:18:05.264360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.661 [2024-07-15 20:18:05.920708] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.661 20:18:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.661 20:18:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.661 Malloc1 00:23:08.661 [2024-07-15 20:18:06.024225] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.661 Malloc2 00:23:08.661 Malloc3 00:23:08.922 Malloc4 00:23:08.922 Malloc5 00:23:08.922 Malloc6 00:23:08.922 Malloc7 00:23:08.922 Malloc8 00:23:08.922 Malloc9 00:23:09.184 Malloc10 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1061562 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1061562 
/var/tmp/bdevperf.sock 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1061562 ']' 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 
"name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 [2024-07-15 20:18:06.471519] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:23:09.184 [2024-07-15 20:18:06.471572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.184 { 00:23:09.184 "params": { 00:23:09.184 "name": "Nvme$subsystem", 00:23:09.184 "trtype": "$TEST_TRANSPORT", 00:23:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.184 "adrfam": "ipv4", 00:23:09.184 "trsvcid": "$NVMF_PORT", 00:23:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.184 "hdgst": ${hdgst:-false}, 00:23:09.184 "ddgst": ${ddgst:-false} 00:23:09.184 }, 00:23:09.184 "method": "bdev_nvme_attach_controller" 00:23:09.184 } 00:23:09.184 EOF 00:23:09.184 )") 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.184 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.184 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.185 { 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme$subsystem", 00:23:09.185 "trtype": "$TEST_TRANSPORT", 00:23:09.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "$NVMF_PORT", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.185 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:09.185 "hdgst": ${hdgst:-false}, 00:23:09.185 "ddgst": ${ddgst:-false} 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 } 00:23:09.185 EOF 00:23:09.185 )") 00:23:09.185 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.185 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:09.185 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:09.185 20:18:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme1", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme2", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme3", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme4", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme5", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme6", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme7", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.185 "hdgst": false, 00:23:09.185 
"ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme8", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme9", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 },{ 00:23:09.185 "params": { 00:23:09.185 "name": "Nvme10", 00:23:09.185 "trtype": "tcp", 00:23:09.185 "traddr": "10.0.0.2", 00:23:09.185 "adrfam": "ipv4", 00:23:09.185 "trsvcid": "4420", 00:23:09.185 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.185 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.185 "hdgst": false, 00:23:09.185 "ddgst": false 00:23:09.185 }, 00:23:09.185 "method": "bdev_nvme_attach_controller" 00:23:09.185 }' 00:23:09.185 [2024-07-15 20:18:06.532645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.185 [2024-07-15 20:18:06.597641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1061562 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:11.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1061562 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:11.095 20:18:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1061324 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local 
subsystem config 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.665 { 00:23:11.665 "params": { 00:23:11.665 "name": "Nvme$subsystem", 00:23:11.665 "trtype": "$TEST_TRANSPORT", 00:23:11.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.665 "adrfam": "ipv4", 00:23:11.665 "trsvcid": "$NVMF_PORT", 00:23:11.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.665 "hdgst": ${hdgst:-false}, 00:23:11.665 "ddgst": ${ddgst:-false} 00:23:11.665 }, 00:23:11.665 "method": "bdev_nvme_attach_controller" 00:23:11.665 } 00:23:11.665 EOF 00:23:11.665 )") 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.665 { 00:23:11.665 "params": { 00:23:11.665 "name": "Nvme$subsystem", 00:23:11.665 "trtype": "$TEST_TRANSPORT", 00:23:11.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.665 "adrfam": "ipv4", 00:23:11.665 "trsvcid": "$NVMF_PORT", 00:23:11.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.665 "hdgst": ${hdgst:-false}, 00:23:11.665 "ddgst": ${ddgst:-false} 00:23:11.665 }, 00:23:11.665 "method": "bdev_nvme_attach_controller" 00:23:11.665 } 00:23:11.665 EOF 00:23:11.665 )") 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.665 { 00:23:11.665 "params": { 00:23:11.665 "name": "Nvme$subsystem", 00:23:11.665 "trtype": "$TEST_TRANSPORT", 00:23:11.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.665 "adrfam": "ipv4", 00:23:11.665 "trsvcid": "$NVMF_PORT", 00:23:11.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.665 "hdgst": ${hdgst:-false}, 00:23:11.665 "ddgst": ${ddgst:-false} 00:23:11.665 }, 00:23:11.665 "method": "bdev_nvme_attach_controller" 00:23:11.665 } 00:23:11.665 EOF 00:23:11.665 )") 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.665 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.665 { 00:23:11.665 "params": { 00:23:11.665 "name": "Nvme$subsystem", 00:23:11.665 "trtype": "$TEST_TRANSPORT", 00:23:11.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.665 "adrfam": "ipv4", 00:23:11.665 "trsvcid": "$NVMF_PORT", 00:23:11.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.665 "hdgst": ${hdgst:-false}, 00:23:11.665 "ddgst": ${ddgst:-false} 00:23:11.665 }, 00:23:11.665 "method": "bdev_nvme_attach_controller" 00:23:11.665 } 00:23:11.665 EOF 00:23:11.665 )") 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 
20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.976 { 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme$subsystem", 00:23:11.976 "trtype": "$TEST_TRANSPORT", 00:23:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "$NVMF_PORT", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.976 "hdgst": ${hdgst:-false}, 00:23:11.976 "ddgst": ${ddgst:-false} 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 } 00:23:11.976 EOF 00:23:11.976 )") 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.976 { 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme$subsystem", 00:23:11.976 "trtype": "$TEST_TRANSPORT", 00:23:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "$NVMF_PORT", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.976 "hdgst": ${hdgst:-false}, 00:23:11.976 "ddgst": ${ddgst:-false} 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 } 00:23:11.976 EOF 00:23:11.976 )") 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 [2024-07-15 20:18:09.115421] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
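For reference, the bdevperf command launched above for tc1 uses a short verify pass against the ten attached controllers. The flags shown in the trace map as follows (paths relative to the SPDK tree, and assuming the config helper from nvmf/common.sh is sourced):

# -q 64      queue depth per bdev
# -o 65536   I/O size in bytes (64 KiB)
# -w verify  verification workload (data is checked on read-back)
# -t 1       run time in seconds
./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 \
	--json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)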
00:23:11.976 [2024-07-15 20:18:09.115473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062083 ] 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.976 { 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme$subsystem", 00:23:11.976 "trtype": "$TEST_TRANSPORT", 00:23:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "$NVMF_PORT", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.976 "hdgst": ${hdgst:-false}, 00:23:11.976 "ddgst": ${ddgst:-false} 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 } 00:23:11.976 EOF 00:23:11.976 )") 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.976 { 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme$subsystem", 00:23:11.976 "trtype": "$TEST_TRANSPORT", 00:23:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "$NVMF_PORT", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.976 "hdgst": ${hdgst:-false}, 00:23:11.976 "ddgst": ${ddgst:-false} 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 } 00:23:11.976 EOF 00:23:11.976 )") 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.976 { 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme$subsystem", 00:23:11.976 "trtype": "$TEST_TRANSPORT", 00:23:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "$NVMF_PORT", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.976 "hdgst": ${hdgst:-false}, 00:23:11.976 "ddgst": ${ddgst:-false} 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 } 00:23:11.976 EOF 00:23:11.976 )") 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.976 { 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme$subsystem", 00:23:11.976 "trtype": "$TEST_TRANSPORT", 00:23:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "$NVMF_PORT", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.976 
"hdgst": ${hdgst:-false}, 00:23:11.976 "ddgst": ${ddgst:-false} 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 } 00:23:11.976 EOF 00:23:11.976 )") 00:23:11.976 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:11.976 20:18:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme1", 00:23:11.976 "trtype": "tcp", 00:23:11.976 "traddr": "10.0.0.2", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "4420", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.976 "hdgst": false, 00:23:11.976 "ddgst": false 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 },{ 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme2", 00:23:11.976 "trtype": "tcp", 00:23:11.976 "traddr": "10.0.0.2", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "4420", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.976 "hdgst": false, 00:23:11.976 "ddgst": false 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 },{ 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme3", 00:23:11.976 "trtype": "tcp", 00:23:11.976 "traddr": "10.0.0.2", 00:23:11.976 "adrfam": "ipv4", 00:23:11.976 "trsvcid": "4420", 00:23:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:11.976 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:11.976 "hdgst": false, 00:23:11.976 "ddgst": false 00:23:11.976 }, 00:23:11.976 "method": "bdev_nvme_attach_controller" 00:23:11.976 },{ 00:23:11.976 "params": { 00:23:11.976 "name": "Nvme4", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:11.977 "hdgst": false, 00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 },{ 00:23:11.977 "params": { 00:23:11.977 "name": "Nvme5", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:11.977 "hdgst": false, 00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 },{ 00:23:11.977 "params": { 00:23:11.977 "name": "Nvme6", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:11.977 "hdgst": false, 00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 },{ 00:23:11.977 "params": { 00:23:11.977 "name": "Nvme7", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:11.977 "hdgst": false, 
00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 },{ 00:23:11.977 "params": { 00:23:11.977 "name": "Nvme8", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:11.977 "hdgst": false, 00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 },{ 00:23:11.977 "params": { 00:23:11.977 "name": "Nvme9", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:11.977 "hdgst": false, 00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 },{ 00:23:11.977 "params": { 00:23:11.977 "name": "Nvme10", 00:23:11.977 "trtype": "tcp", 00:23:11.977 "traddr": "10.0.0.2", 00:23:11.977 "adrfam": "ipv4", 00:23:11.977 "trsvcid": "4420", 00:23:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:11.977 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:11.977 "hdgst": false, 00:23:11.977 "ddgst": false 00:23:11.977 }, 00:23:11.977 "method": "bdev_nvme_attach_controller" 00:23:11.977 }' 00:23:11.977 [2024-07-15 20:18:09.174843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.977 [2024-07-15 20:18:09.239525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.360 Running I/O for 1 seconds... 00:23:14.302 00:23:14.302 Latency(us) 00:23:14.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.302 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme1n1 : 1.13 226.20 14.14 0.00 0.00 279594.03 21080.75 281367.89 00:23:14.302 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme2n1 : 1.04 184.01 11.50 0.00 0.00 336771.98 22828.37 270882.13 00:23:14.302 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme3n1 : 1.04 184.43 11.53 0.00 0.00 330834.49 42161.49 253405.87 00:23:14.302 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme4n1 : 1.07 238.51 14.91 0.00 0.00 251312.00 22282.24 256901.12 00:23:14.302 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme5n1 : 1.16 275.17 17.20 0.00 0.00 215010.65 20971.52 277872.64 00:23:14.302 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme6n1 : 1.15 222.85 13.93 0.00 0.00 260684.37 23483.73 262144.00 00:23:14.302 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme7n1 : 1.14 223.69 13.98 0.00 0.00 254752.85 20425.39 258648.75 00:23:14.302 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme8n1 : 1.20 
266.75 16.67 0.00 0.00 211052.03 21954.56 242920.11 00:23:14.302 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme9n1 : 1.16 220.89 13.81 0.00 0.00 249179.95 21080.75 277872.64 00:23:14.302 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.302 Verification LBA range: start 0x0 length 0x400 00:23:14.302 Nvme10n1 : 1.21 314.94 19.68 0.00 0.00 172734.18 10048.85 253405.87 00:23:14.302 =================================================================================================================== 00:23:14.303 Total : 2357.44 147.34 0.00 0.00 246522.30 10048.85 281367.89 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.563 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.563 rmmod nvme_tcp 00:23:14.563 rmmod nvme_fabrics 00:23:14.564 rmmod nvme_keyring 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1061324 ']' 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1061324 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1061324 ']' 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1061324 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1061324 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:14.564 20:18:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1061324' 00:23:14.564 killing process with pid 1061324 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1061324 00:23:14.564 20:18:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1061324 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.824 20:18:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.370 00:23:17.370 real 0m16.501s 00:23:17.370 user 0m33.733s 00:23:17.370 sys 0m6.500s 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.370 ************************************ 00:23:17.370 END TEST nvmf_shutdown_tc1 00:23:17.370 ************************************ 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.370 ************************************ 00:23:17.370 START TEST nvmf_shutdown_tc2 00:23:17.370 ************************************ 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.370 20:18:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:17.370 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:17.370 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:17.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:17.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:17.371 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:23:17.371 00:23:17.371 --- 10.0.0.2 ping statistics --- 00:23:17.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.371 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:23:17.371 00:23:17.371 --- 10.0.0.1 ping statistics --- 00:23:17.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.371 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1063655 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1063655 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1063655 ']' 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.371 20:18:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.371 [2024-07-15 20:18:14.766580] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:23:17.371 [2024-07-15 20:18:14.766633] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.371 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.632 [2024-07-15 20:18:14.850571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.632 [2024-07-15 20:18:14.911874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.632 [2024-07-15 20:18:14.911914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.632 [2024-07-15 20:18:14.911919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.632 [2024-07-15 20:18:14.911924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.632 [2024-07-15 20:18:14.911927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
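Condensed from the xtrace above, the tc2 test bed splits the two ports of the e810 NIC across network namespaces: the target-side port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, so the NVMe/TCP traffic goes through the real NICs rather than loopback. Recapped as plain commands (interface names and addresses are the ones this CI host uses):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
# nvmf_tgt itself is then launched inside the namespace:
# ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E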
00:23:17.632 [2024-07-15 20:18:14.912093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.632 [2024-07-15 20:18:14.912252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.632 [2024-07-15 20:18:14.912545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.632 [2024-07-15 20:18:14.912545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.204 [2024-07-15 20:18:15.589470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.204 20:18:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.204 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.464 20:18:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.464 Malloc1 00:23:18.464 [2024-07-15 20:18:15.684230] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.464 Malloc2 00:23:18.464 Malloc3 00:23:18.464 Malloc4 00:23:18.464 Malloc5 00:23:18.464 Malloc6 00:23:18.464 Malloc7 00:23:18.724 Malloc8 00:23:18.724 Malloc9 00:23:18.724 Malloc10 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1064038 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1064038 /var/tmp/bdevperf.sock 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1064038 ']' 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
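The create_subsystems loop traced above (target/shutdown.sh@27/@28) appends one block of RPCs per subsystem to rpcs.txt and then replays the file through rpc_cmd, which is why Malloc1 through Malloc10 are echoed back as each block is applied and a single TCP listener comes up on 10.0.0.2:4420. A hypothetical block for subsystem 1, assuming the usual malloc-backed layout (the RPC names are real rpc.py methods; sizes and the serial number are illustrative only):

# sketch of the rpcs.txt entries for i=1
bdev_malloc_create -b Malloc1 128 512                          # 128 MiB bdev, 512 B blocks (illustrative size)
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # -a: allow any host
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # expose the bdev as a namespace
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420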
00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.724 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.724 { 00:23:18.724 "params": { 00:23:18.724 "name": "Nvme$subsystem", 00:23:18.724 "trtype": "$TEST_TRANSPORT", 00:23:18.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.724 "adrfam": "ipv4", 00:23:18.724 "trsvcid": "$NVMF_PORT", 00:23:18.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.724 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 [2024-07-15 20:18:16.129115] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:23:18.725 [2024-07-15 20:18:16.129173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064038 ] 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.725 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 
"hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.726 "method": "bdev_nvme_attach_controller" 00:23:18.726 } 00:23:18.726 EOF 00:23:18.726 )") 00:23:18.726 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.726 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.986 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:18.986 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:18.986 20:18:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.986 "params": { 00:23:18.986 "name": "Nvme1", 00:23:18.986 "trtype": "tcp", 00:23:18.986 "traddr": "10.0.0.2", 00:23:18.986 "adrfam": "ipv4", 00:23:18.986 "trsvcid": "4420", 00:23:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.986 "hdgst": false, 00:23:18.986 "ddgst": false 00:23:18.986 }, 00:23:18.986 "method": "bdev_nvme_attach_controller" 00:23:18.986 },{ 00:23:18.986 "params": { 00:23:18.986 "name": "Nvme2", 00:23:18.986 "trtype": "tcp", 00:23:18.986 "traddr": "10.0.0.2", 00:23:18.986 "adrfam": "ipv4", 00:23:18.986 "trsvcid": "4420", 00:23:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.986 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.986 "hdgst": false, 00:23:18.986 "ddgst": false 00:23:18.986 }, 00:23:18.986 "method": "bdev_nvme_attach_controller" 00:23:18.986 },{ 00:23:18.986 "params": { 00:23:18.986 "name": "Nvme3", 00:23:18.986 "trtype": "tcp", 00:23:18.986 "traddr": "10.0.0.2", 00:23:18.986 "adrfam": "ipv4", 00:23:18.986 "trsvcid": "4420", 00:23:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.986 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.986 "hdgst": false, 00:23:18.986 "ddgst": false 00:23:18.986 }, 00:23:18.986 "method": "bdev_nvme_attach_controller" 00:23:18.986 },{ 00:23:18.986 "params": { 00:23:18.986 "name": "Nvme4", 00:23:18.986 "trtype": "tcp", 00:23:18.986 "traddr": "10.0.0.2", 00:23:18.986 "adrfam": "ipv4", 00:23:18.986 "trsvcid": "4420", 00:23:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.986 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.986 "hdgst": false, 00:23:18.986 "ddgst": false 00:23:18.986 }, 00:23:18.986 "method": "bdev_nvme_attach_controller" 00:23:18.986 },{ 00:23:18.986 "params": { 00:23:18.986 "name": "Nvme5", 00:23:18.986 "trtype": "tcp", 00:23:18.986 "traddr": "10.0.0.2", 00:23:18.986 "adrfam": "ipv4", 00:23:18.986 "trsvcid": "4420", 00:23:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.986 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.986 "hdgst": false, 00:23:18.986 "ddgst": false 00:23:18.986 }, 00:23:18.986 "method": "bdev_nvme_attach_controller" 00:23:18.986 },{ 00:23:18.986 "params": { 00:23:18.986 "name": "Nvme6", 00:23:18.986 "trtype": "tcp", 00:23:18.986 "traddr": "10.0.0.2", 00:23:18.986 "adrfam": "ipv4", 00:23:18.986 "trsvcid": "4420", 00:23:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.986 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.986 "hdgst": false, 00:23:18.986 "ddgst": false 00:23:18.986 }, 00:23:18.986 "method": "bdev_nvme_attach_controller" 00:23:18.986 },{ 00:23:18.986 "params": { 00:23:18.987 "name": "Nvme7", 00:23:18.987 "trtype": "tcp", 00:23:18.987 "traddr": "10.0.0.2", 00:23:18.987 "adrfam": "ipv4", 00:23:18.987 "trsvcid": "4420", 00:23:18.987 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.987 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.987 "hdgst": false, 
00:23:18.987 "ddgst": false 00:23:18.987 }, 00:23:18.987 "method": "bdev_nvme_attach_controller" 00:23:18.987 },{ 00:23:18.987 "params": { 00:23:18.987 "name": "Nvme8", 00:23:18.987 "trtype": "tcp", 00:23:18.987 "traddr": "10.0.0.2", 00:23:18.987 "adrfam": "ipv4", 00:23:18.987 "trsvcid": "4420", 00:23:18.987 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.987 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.987 "hdgst": false, 00:23:18.987 "ddgst": false 00:23:18.987 }, 00:23:18.987 "method": "bdev_nvme_attach_controller" 00:23:18.987 },{ 00:23:18.987 "params": { 00:23:18.987 "name": "Nvme9", 00:23:18.987 "trtype": "tcp", 00:23:18.987 "traddr": "10.0.0.2", 00:23:18.987 "adrfam": "ipv4", 00:23:18.987 "trsvcid": "4420", 00:23:18.987 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.987 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.987 "hdgst": false, 00:23:18.987 "ddgst": false 00:23:18.987 }, 00:23:18.987 "method": "bdev_nvme_attach_controller" 00:23:18.987 },{ 00:23:18.987 "params": { 00:23:18.987 "name": "Nvme10", 00:23:18.987 "trtype": "tcp", 00:23:18.987 "traddr": "10.0.0.2", 00:23:18.987 "adrfam": "ipv4", 00:23:18.987 "trsvcid": "4420", 00:23:18.987 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.987 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.987 "hdgst": false, 00:23:18.987 "ddgst": false 00:23:18.987 }, 00:23:18.987 "method": "bdev_nvme_attach_controller" 00:23:18.987 }' 00:23:18.987 [2024-07-15 20:18:16.188569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.987 [2024-07-15 20:18:16.253733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.370 Running I/O for 10 seconds... 00:23:20.371 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.371 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:20.371 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:20.371 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.371 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:20.631 20:18:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:20.891 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1064038 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1064038 ']' 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1064038 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:21.152 20:18:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.152 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1064038 00:23:21.412 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.412 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:21.412 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1064038' 00:23:21.412 killing process with pid 1064038 00:23:21.412 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1064038 00:23:21.412 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1064038 00:23:21.412 Received shutdown signal, test time was about 0.978678 seconds 00:23:21.412 00:23:21.412 Latency(us) 00:23:21.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.412 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme1n1 : 0.97 263.42 16.46 0.00 0.00 240200.11 22719.15 242920.11 00:23:21.412 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme2n1 : 0.95 203.02 12.69 0.00 0.00 305177.60 22828.37 256901.12 00:23:21.412 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme3n1 : 0.93 206.63 12.91 0.00 0.00 293484.94 23156.05 248162.99 00:23:21.412 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme4n1 : 0.97 264.25 16.52 0.00 0.00 225095.47 23592.96 246415.36 00:23:21.412 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme5n1 : 0.98 259.77 16.24 0.00 0.00 223973.58 18568.53 246415.36 00:23:21.412 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme6n1 : 0.96 267.21 16.70 0.00 0.00 212918.19 23265.28 249910.61 00:23:21.412 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme7n1 : 0.94 204.28 12.77 0.00 0.00 271387.88 22173.01 251658.24 00:23:21.412 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme8n1 : 0.97 270.28 16.89 0.00 0.00 200987.08 2293.76 221948.59 00:23:21.412 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme9n1 : 0.95 268.35 16.77 0.00 0.00 197604.48 15947.09 237677.23 00:23:21.412 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.412 Verification LBA range: start 0x0 length 0x400 00:23:21.412 Nvme10n1 : 0.95 201.93 12.62 0.00 0.00 255911.54 23811.41 265639.25 00:23:21.412 =================================================================================================================== 00:23:21.412 Total : 2409.13 150.57 0.00 0.00 238292.62 
2293.76 265639.25 00:23:21.412 20:18:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1063655 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.797 rmmod nvme_tcp 00:23:22.797 rmmod nvme_fabrics 00:23:22.797 rmmod nvme_keyring 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1063655 ']' 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1063655 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1063655 ']' 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1063655 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1063655 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1063655' 00:23:22.797 killing process with pid 1063655 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1063655 00:23:22.797 20:18:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1063655 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
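The pass condition for the tc2 run above is the waitforio gate in target/shutdown.sh: the polled num_read_ops for Nvme1n1 climbs 3 -> 67 -> 131, the third sample clears the 100-read threshold, ret becomes 0, and the target is shut down while I/O is still in flight. Condensed from the xtrace, the gate is roughly:

waitforio() {
  local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    # query bdevperf (not the target) over its own RPC socket
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}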
00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.797 20:18:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.347 00:23:25.347 real 0m7.927s 00:23:25.347 user 0m23.846s 00:23:25.347 sys 0m1.315s 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.347 ************************************ 00:23:25.347 END TEST nvmf_shutdown_tc2 00:23:25.347 ************************************ 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.347 ************************************ 00:23:25.347 START TEST nvmf_shutdown_tc3 00:23:25.347 ************************************ 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
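The teardown traced above (stoptarget followed by nvmftestfini) is what the END TEST nvmf_shutdown_tc2 summary closes out before tc3 re-runs nvmftestinit. In outline, and lightly condensed from the trace:

rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"   # $testdir = .../spdk/test/nvmf/target
sync
modprobe -v -r nvme-tcp        # the -v output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"         # nvmf_tgt, pid 1063655 in this run
_remove_spdk_ns                # drops the cvl_0_0_ns_spdk namespace (its output is suppressed in the trace)
ip -4 addr flush cvl_0_1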
00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.347 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.348 20:18:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.348 20:18:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:23:25.348 00:23:25.348 --- 10.0.0.2 ping statistics --- 00:23:25.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.348 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:23:25.348 00:23:25.348 --- 10.0.0.1 ping statistics --- 00:23:25.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.348 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1065490 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1065490 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.348 20:18:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1065490 ']' 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.348 20:18:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.609 [2024-07-15 20:18:22.795732] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:23:25.609 [2024-07-15 20:18:22.795782] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.609 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.609 [2024-07-15 20:18:22.853558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.609 [2024-07-15 20:18:22.908081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.609 [2024-07-15 20:18:22.908113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.609 [2024-07-15 20:18:22.908118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.609 [2024-07-15 20:18:22.908127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.609 [2024-07-15 20:18:22.908131] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
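For tc3, nvmftestinit rebuilds the split-namespace topology traced above before nvmfappstart launches the target: the first E810 port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and both directions are checked with a single ping each. Stripped of the xtrace prefixes and lightly condensed, the sequence is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target (0.488 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator (0.343 ms above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                                            # 1065490 in this run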
00:23:25.609 [2024-07-15 20:18:22.908312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.609 [2024-07-15 20:18:22.908474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.609 [2024-07-15 20:18:22.908635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.610 [2024-07-15 20:18:22.908637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.181 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.181 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:26.181 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.181 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.181 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.442 [2024-07-15 20:18:23.628692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.442 20:18:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.442 Malloc1 00:23:26.442 [2024-07-15 20:18:23.727487] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.442 Malloc2 00:23:26.442 Malloc3 00:23:26.442 Malloc4 00:23:26.442 Malloc5 00:23:26.703 Malloc6 00:23:26.703 Malloc7 00:23:26.703 Malloc8 00:23:26.703 Malloc9 00:23:26.703 Malloc10 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1065820 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1065820 /var/tmp/bdevperf.sock 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1065820 ']' 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
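tc3 now reaches the same launch pattern tc2 used: gen_nvmf_target_json renders one attach-controller entry per subsystem, and bdevperf reads the result over an anonymous descriptor (the --json /dev/fd/63 seen below is simply what bash process substitution expands to). Spelled out, target/shutdown.sh@124-126 amount to roughly:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10 &       # 64-deep, 64 KiB verify workload for 10 seconds
perfpid=$!                                     # 1065820 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock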
00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.703 { 00:23:26.703 "params": { 00:23:26.703 "name": "Nvme$subsystem", 00:23:26.703 "trtype": "$TEST_TRANSPORT", 00:23:26.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.703 "adrfam": "ipv4", 00:23:26.703 "trsvcid": "$NVMF_PORT", 00:23:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.703 "hdgst": ${hdgst:-false}, 00:23:26.703 "ddgst": ${ddgst:-false} 00:23:26.703 }, 00:23:26.703 "method": "bdev_nvme_attach_controller" 00:23:26.703 } 00:23:26.703 EOF 00:23:26.703 )") 00:23:26.703 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.964 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.964 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.964 { 00:23:26.964 "params": { 00:23:26.964 "name": "Nvme$subsystem", 00:23:26.964 "trtype": "$TEST_TRANSPORT", 00:23:26.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.964 "adrfam": "ipv4", 00:23:26.964 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.965 { 00:23:26.965 "params": { 00:23:26.965 "name": "Nvme$subsystem", 00:23:26.965 "trtype": "$TEST_TRANSPORT", 00:23:26.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.965 "adrfam": "ipv4", 00:23:26.965 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.965 { 00:23:26.965 "params": { 00:23:26.965 "name": "Nvme$subsystem", 00:23:26.965 "trtype": "$TEST_TRANSPORT", 00:23:26.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.965 "adrfam": "ipv4", 00:23:26.965 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.965 { 00:23:26.965 "params": { 00:23:26.965 "name": "Nvme$subsystem", 00:23:26.965 "trtype": "$TEST_TRANSPORT", 00:23:26.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.965 "adrfam": "ipv4", 00:23:26.965 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.965 { 00:23:26.965 "params": { 00:23:26.965 "name": "Nvme$subsystem", 00:23:26.965 "trtype": "$TEST_TRANSPORT", 00:23:26.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.965 "adrfam": "ipv4", 00:23:26.965 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.965 { 00:23:26.965 "params": { 00:23:26.965 "name": "Nvme$subsystem", 00:23:26.965 "trtype": "$TEST_TRANSPORT", 00:23:26.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.965 "adrfam": "ipv4", 00:23:26.965 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 [2024-07-15 20:18:24.183679] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:23:26.965 [2024-07-15 20:18:24.183731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065820 ] 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.965 { 00:23:26.965 "params": { 00:23:26.965 "name": "Nvme$subsystem", 00:23:26.965 "trtype": "$TEST_TRANSPORT", 00:23:26.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.965 "adrfam": "ipv4", 00:23:26.965 "trsvcid": "$NVMF_PORT", 00:23:26.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.965 "hdgst": ${hdgst:-false}, 00:23:26.965 "ddgst": ${ddgst:-false} 00:23:26.965 }, 00:23:26.965 "method": "bdev_nvme_attach_controller" 00:23:26.965 } 00:23:26.965 EOF 00:23:26.965 )") 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.965 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.966 { 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme$subsystem", 00:23:26.966 "trtype": "$TEST_TRANSPORT", 00:23:26.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "$NVMF_PORT", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.966 "hdgst": ${hdgst:-false}, 00:23:26.966 "ddgst": ${ddgst:-false} 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 } 00:23:26.966 EOF 00:23:26.966 )") 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.966 { 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme$subsystem", 00:23:26.966 "trtype": "$TEST_TRANSPORT", 00:23:26.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "$NVMF_PORT", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.966 "hdgst": ${hdgst:-false}, 00:23:26.966 "ddgst": ${ddgst:-false} 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 } 00:23:26.966 EOF 00:23:26.966 )") 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
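Note: the nvmf/common.sh@532-558 trace above is gen_nvmf_target_json building the --json config that bdevperf reads from /dev/fd/63: one bdev_nvme_attach_controller fragment per subsystem (the heredoc logged repeatedly above), joined with IFS=, and passed through jq. The condensed sketch below mirrors that trace; the outer "subsystems"/"bdev" wrapper and the default values are illustrative assumptions, not a copy of the real helper.

# Illustrative sketch of the traced config generation (not nvmf/common.sh itself).
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem, as logged above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    # Join the fragments with commas and let jq validate/pretty-print the result
    # (assumed envelope; the real helper's exact wrapper may differ).
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}
# e.g.: gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf_targets.json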
00:23:26.966 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:26.966 20:18:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme1", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme2", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme3", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme4", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme5", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme6", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme7", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.966 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.966 "hdgst": false, 00:23:26.966 "ddgst": false 00:23:26.966 }, 00:23:26.966 "method": "bdev_nvme_attach_controller" 00:23:26.966 },{ 00:23:26.966 "params": { 00:23:26.966 "name": "Nvme8", 00:23:26.966 "trtype": "tcp", 00:23:26.966 "traddr": "10.0.0.2", 00:23:26.966 "adrfam": "ipv4", 00:23:26.966 "trsvcid": "4420", 00:23:26.966 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.966 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:26.967 "hdgst": false, 00:23:26.967 "ddgst": false 00:23:26.967 }, 00:23:26.967 "method": "bdev_nvme_attach_controller" 00:23:26.967 },{ 00:23:26.967 "params": { 00:23:26.967 "name": "Nvme9", 00:23:26.967 "trtype": "tcp", 00:23:26.967 "traddr": "10.0.0.2", 00:23:26.967 "adrfam": "ipv4", 00:23:26.967 "trsvcid": "4420", 00:23:26.967 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.967 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.967 "hdgst": false, 00:23:26.967 "ddgst": false 00:23:26.967 }, 00:23:26.967 "method": "bdev_nvme_attach_controller" 00:23:26.967 },{ 00:23:26.967 "params": { 00:23:26.967 "name": "Nvme10", 00:23:26.967 "trtype": "tcp", 00:23:26.967 "traddr": "10.0.0.2", 00:23:26.967 "adrfam": "ipv4", 00:23:26.967 "trsvcid": "4420", 00:23:26.967 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.967 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.967 "hdgst": false, 00:23:26.967 "ddgst": false 00:23:26.967 }, 00:23:26.967 "method": "bdev_nvme_attach_controller" 00:23:26.967 }' 00:23:26.967 [2024-07-15 20:18:24.243586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.967 [2024-07-15 20:18:24.309448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.353 Running I/O for 10 seconds... 00:23:28.353 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.353 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:28.353 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:28.353 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.353 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.614 20:18:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:28.614 20:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:28.875 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:29.136 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1065490 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1065490 ']' 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1065490 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.412 20:18:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1065490 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1065490' 00:23:29.412 killing process with pid 1065490 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1065490 00:23:29.412 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1065490 00:23:29.412 [2024-07-15 20:18:26.629393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318ae0 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:29.412 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:1[2024-07-15 20:18:26.630648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1the state(5) to be set 00:23:29.412 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:29.412 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.412 [2024-07-15 20:18:26.630702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.412 [2024-07-15 20:18:26.630706] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1the state(5) to be set 00:23:29.412 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.412 [2024-07-15 20:18:26.630713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630740] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630745] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1the state(5) to be set 00:23:29.413 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630768] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:1the state(5) to be set 00:23:29.413 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 20:18:26.630807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:29.413 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1[2024-07-15 20:18:26.630836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 20:18:26.630880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128the state(5) to be set 00:23:29.413 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:29.413 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.630967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with [2024-07-15 20:18:26.630972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:29.413 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630979] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128[2024-07-15 20:18:26.630984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.630992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.630996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.631001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319c50 is same with the state(5) to be set 00:23:29.413 [2024-07-15 20:18:26.631002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.413 [2024-07-15 20:18:26.631010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.413 [2024-07-15 20:18:26.631019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:29.414 [2024-07-15 20:18:26.631193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 
[2024-07-15 20:18:26.631353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.414 [2024-07-15 20:18:26.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.414 [2024-07-15 20:18:26.631492] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f34b0 was disconnected and freed. reset controller. 
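Note: the target/shutdown.sh@57-69 trace further up (read_io_count=3, then 67, then 131, at which point 131 -ge 100 breaks the loop) is the waitforio gate: tc3 only killprocess-es the nvmf target (pid 1065490) once bdevperf has pushed at least 100 reads through Nvme1n1, so the shutdown lands with I/O in flight, which is what produces the ABORTED - SQ DELETION completions and the qpair disconnect logged above. A rough sketch of that polling loop, with scripts/rpc.py standing in for the rpc_cmd wrapper used in the trace:

# Sketch only; mirrors the traced loop, does not reproduce shutdown.sh verbatim.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0        # enough I/O observed; safe to start the shutdown
            break
        fi
        sleep 0.25       # same back-off as the trace (shutdown.sh@67)
    done
    return $ret
}
# e.g.: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1 && killprocess <nvmf target pid>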
00:23:29.414 [2024-07-15 20:18:26.631978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.631996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632061] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632104] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.414 [2024-07-15 20:18:26.632182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632196] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632232] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632250] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632274] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.632283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318f80 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the 
state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633400] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633414] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633423] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633427] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633459] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633486] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633518] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 
20:18:26.633535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633596] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.633609] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319420 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.634418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23198e0 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.634692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.634708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.634713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same 
with the state(5) to be set 00:23:29.415 [2024-07-15 20:18:26.634718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.416 [2024-07-15 20:18:26.634750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f5d0 (9): Bad file descriptor 00:23:29.416 [2024-07-15 20:18:26.634780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634977] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634991] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.634995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635008] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254ffb0 is same with the state(5) to be set 00:23:29.416 [2024-07-15 20:18:26.635704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.416 [2024-07-15 20:18:26.635725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.416 [2024-07-15 20:18:26.635737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.416 [2024-07-15 20:18:26.635744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.416 [2024-07-15 20:18:26.635754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.416 [2024-07-15 20:18:26.635761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.416 [2024-07-15 20:18:26.635770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.416 [2024-07-15 20:18:26.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.416 [2024-07-15 20:18:26.635786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.416 [2024-07-15 20:18:26.635793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.416 [2024-07-15 20:18:26.635802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2550450 is same with the state(5) to be set 00:23:29.417 [2024-07-15 20:18:26.635982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.635990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.635999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 
[2024-07-15 20:18:26.636064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 
20:18:26.636229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.417 [2024-07-15 20:18:26.636440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.417 [2024-07-15 20:18:26.636449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.417 [2024-07-15 20:18:26.636457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.417 [2024-07-15 20:18:26.636461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.417 [2024-07-15 20:18:26.636462] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.417 [2024-07-15 20:18:26.636469] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.636474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636479] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.636484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.636489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set
00:23:29.418 [2024-07-15 20:18:26.636495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.636499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.636511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.636521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.636526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.636539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.636544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.636553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.418 [2024-07-15 20:18:26.636564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.636573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.418 [2024-07-15 20:18:26.636576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.636586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.637738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.638907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.638959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639230] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1949530 was disconnected and freed. reset controller. 00:23:29.418 [2024-07-15 20:18:26.639363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639752] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.639904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.639960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.640011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.640067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.640115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.640199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.640258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.640314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.640363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.640414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.418 [2024-07-15 20:18:26.640468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.418 [2024-07-15 20:18:26.640519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.640568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.640620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.640670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.640722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.640772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.640825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.640872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.640925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.640976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.641932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.641983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.642953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.642998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.643095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.643217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.643319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.643419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.643530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.643630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.419 [2024-07-15 20:18:26.643680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.419 [2024-07-15 20:18:26.653032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 
00:23:29.419 [2024-07-15 20:18:26.653079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653139] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653145] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653198] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653204] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.419 [2024-07-15 20:18:26.653210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.653293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25508f0 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654153] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654171] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654180] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654197] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654267] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the 
state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654280] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654365] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654374] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654387] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654400] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.654417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551250 is same with the state(5) to be set 00:23:29.420 [2024-07-15 20:18:26.660117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.421 [2024-07-15 20:18:26.660502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.660565] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f3e70 was disconnected and freed. reset controller. 
00:23:29.421 [2024-07-15 20:18:26.661119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.421 [2024-07-15 20:18:26.661148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194f5d0 with addr=10.0.0.2, port=4420 00:23:29.421 [2024-07-15 20:18:26.661157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f5d0 is same with the state(5) to be set 00:23:29.421 [2024-07-15 20:18:26.661196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b23990 is same with the state(5) to be set 00:23:29.421 [2024-07-15 20:18:26.661288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19721b0 is same with the state(5) 
to be set 00:23:29.421 [2024-07-15 20:18:26.661379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11290 is same with the state(5) to be set 00:23:29.421 [2024-07-15 20:18:26.661462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b210 is same with the state(5) to be set 00:23:29.421 [2024-07-15 20:18:26.661543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661561] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.421 [2024-07-15 20:18:26.661576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.421 [2024-07-15 20:18:26.661584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1340 is same with the state(5) to be set 00:23:29.422 [2024-07-15 20:18:26.661630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198bca0 is same with the state(5) to be set 00:23:29.422 [2024-07-15 20:18:26.661710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c030 is same with the state(5) to be set 00:23:29.422 [2024-07-15 20:18:26.661795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aece90 is same with the state(5) to be set 00:23:29.422 [2024-07-15 20:18:26.661872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.422 [2024-07-15 20:18:26.661938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.661945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b100c0 is same with the state(5) to be set 00:23:29.422 [2024-07-15 20:18:26.661966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f5d0 (9): Bad file descriptor 00:23:29.422 [2024-07-15 20:18:26.664640] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.422 [2024-07-15 20:18:26.664667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:29.422 [2024-07-15 20:18:26.664680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:29.422 [2024-07-15 20:18:26.664694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aece90 (9): Bad file descriptor 00:23:29.422 [2024-07-15 20:18:26.664704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198bca0 (9): Bad file descriptor 00:23:29.422 [2024-07-15 20:18:26.664753] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.422 [2024-07-15 20:18:26.664791] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.422 [2024-07-15 20:18:26.665003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.422 [2024-07-15 20:18:26.665016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:29.422 [2024-07-15 20:18:26.665029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:29.422 [2024-07-15 20:18:26.665066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 
20:18:26.665241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.422 [2024-07-15 20:18:26.665381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.422 [2024-07-15 20:18:26.665389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665406] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.665989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.665998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.666005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.666014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.666021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.666030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.666037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.666046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.666053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.423 [2024-07-15 20:18:26.666062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.423 [2024-07-15 20:18:26.666069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.424 [2024-07-15 20:18:26.666080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.424 [2024-07-15 20:18:26.666087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.424 [2024-07-15 20:18:26.666096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.424 [2024-07-15 20:18:26.666102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.424 [2024-07-15 20:18:26.666110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a4e0 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.666167] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8a4e0 was disconnected and freed. reset controller. 00:23:29.424 [2024-07-15 20:18:26.668346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.424 [2024-07-15 20:18:26.668775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.668790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198bca0 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.668800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198bca0 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.669363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.669399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aece90 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.669411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aece90 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.670758] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.424 [2024-07-15 20:18:26.670813] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.424 [2024-07-15 20:18:26.670899] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f6770 was disconnected and freed. reset controller. 
00:23:29.424 [2024-07-15 20:18:26.670942] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.424 [2024-07-15 20:18:26.670961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:29.424 [2024-07-15 20:18:26.670983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b210 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.670995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198bca0 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aece90 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b23990 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19721b0 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11290 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1340 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198c030 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b100c0 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.671493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:29.424 [2024-07-15 20:18:26.671523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.671535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.671544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:29.424 [2024-07-15 20:18:26.671555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.671562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.671569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:29.424 [2024-07-15 20:18:26.671865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.424 [2024-07-15 20:18:26.671875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.424 [2024-07-15 20:18:26.671882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:29.424 [2024-07-15 20:18:26.672476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.672511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b210 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.672522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b210 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.672984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.672995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11290 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.673002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11290 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.673507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.673545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194f5d0 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.673556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f5d0 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.673570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b210 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.673582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11290 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.673633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f5d0 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.673643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.673650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.673657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:29.424 [2024-07-15 20:18:26.673671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.673677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.673684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:29.424 [2024-07-15 20:18:26.673719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.424 [2024-07-15 20:18:26.673726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.424 [2024-07-15 20:18:26.673732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.673738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.673746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:29.424 [2024-07-15 20:18:26.673784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.424 [2024-07-15 20:18:26.675094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:29.424 [2024-07-15 20:18:26.675108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:29.424 [2024-07-15 20:18:26.675422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.675436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aece90 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.675444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aece90 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.675901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.424 [2024-07-15 20:18:26.675911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198bca0 with addr=10.0.0.2, port=4420 00:23:29.424 [2024-07-15 20:18:26.675918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198bca0 is same with the state(5) to be set 00:23:29.424 [2024-07-15 20:18:26.675950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aece90 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.675960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198bca0 (9): Bad file descriptor 00:23:29.424 [2024-07-15 20:18:26.675991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.675998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.676005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:29.424 [2024-07-15 20:18:26.676016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:29.424 [2024-07-15 20:18:26.676022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:29.424 [2024-07-15 20:18:26.676029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:29.424 [2024-07-15 20:18:26.676062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.424 [2024-07-15 20:18:26.676069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:29.424 [2024-07-15 20:18:26.681100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.424 [2024-07-15 20:18:26.681115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.424 [2024-07-15 20:18:26.681135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 
20:18:26.681286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681447] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.425 [2024-07-15 20:18:26.681808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.425 [2024-07-15 20:18:26.681815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.681987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.681993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.682160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.682168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8b970 is same with the state(5) to be set 00:23:29.426 [2024-07-15 20:18:26.683464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683576] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.426 [2024-07-15 20:18:26.683820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.426 [2024-07-15 20:18:26.683830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.683984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.683993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:29.427 [2024-07-15 20:18:26.684242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 
20:18:26.684403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.427 [2024-07-15 20:18:26.684523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.427 [2024-07-15 20:18:26.684530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194a9c0 is same with the state(5) to be set 00:23:29.428 [2024-07-15 20:18:26.685798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685835] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.685984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.685993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.428 [2024-07-15 20:18:26.686443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.428 [2024-07-15 20:18:26.686452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.686859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.686867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f5300 is same with the state(5) to be set 00:23:29.429 [2024-07-15 20:18:26.688149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.429 [2024-07-15 20:18:26.688394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.429 [2024-07-15 20:18:26.688403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:29.430 [2024-07-15 20:18:26.688949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.688983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.688993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.689009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.689025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.689041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.689057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.689073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.430 [2024-07-15 20:18:26.689089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.430 [2024-07-15 20:18:26.689096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 
20:18:26.689112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.689132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.689149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.689165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.689181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.689199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.689215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.689223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f7af0 is same with the state(5) to be set 00:23:29.431 [2024-07-15 20:18:26.690732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.690989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.690998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.431 [2024-07-15 20:18:26.691324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.431 [2024-07-15 20:18:26.691336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.432 [2024-07-15 20:18:26.691803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.432 [2024-07-15 20:18:26.691811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f60 is same with the state(5) to be set 00:23:29.432 [2024-07-15 20:18:26.693301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:29.432 [2024-07-15 20:18:26.693323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:29.432 [2024-07-15 20:18:26.693333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:29.432 [2024-07-15 20:18:26.693343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:29.432 [2024-07-15 20:18:26.693433] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.432 task offset: 28672 on job bdev=Nvme1n1 fails 00:23:29.432 00:23:29.432 Latency(us) 00:23:29.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.432 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme1n1 ended in about 0.93 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme1n1 : 0.93 206.76 12.92 68.92 0.00 229507.31 3549.87 244667.73 00:23:29.432 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme2n1 ended in about 0.97 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme2n1 : 0.97 198.92 12.43 66.31 0.00 233930.03 20971.52 239424.85 00:23:29.432 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme3n1 ended in about 0.98 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme3n1 : 0.98 201.44 12.59 65.44 0.00 227915.25 19223.89 228939.09 00:23:29.432 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme4n1 ended in about 0.96 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme4n1 : 0.96 200.44 12.53 66.81 0.00 222608.85 22063.79 246415.36 00:23:29.432 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme5n1 ended in about 0.98 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme5n1 : 0.98 130.57 8.16 65.29 0.00 298043.73 23265.28 248162.99 00:23:29.432 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme6n1 ended in about 0.96 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme6n1 : 0.96 200.19 12.51 66.73 0.00 213446.61 22828.37 239424.85 00:23:29.432 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme7n1 ended in about 0.98 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme7n1 : 0.98 195.39 12.21 65.13 0.00 214590.93 17148.59 248162.99 00:23:29.432 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Verification LBA range: start 0x0 
length 0x400 00:23:29.432 Nvme8n1 : 0.96 199.40 12.46 0.00 0.00 273319.25 22282.24 276125.01 00:23:29.432 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme9n1 ended in about 0.98 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.432 Nvme9n1 : 0.98 129.95 8.12 64.98 0.00 274432.57 21954.56 263891.63 00:23:29.432 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.432 Job: Nvme10n1 ended in about 0.99 seconds with error 00:23:29.432 Verification LBA range: start 0x0 length 0x400 00:23:29.433 Nvme10n1 : 0.99 129.61 8.10 64.81 0.00 269129.96 37137.07 256901.12 00:23:29.433 =================================================================================================================== 00:23:29.433 Total : 1792.67 112.04 594.41 0.00 241990.91 3549.87 276125.01 00:23:29.433 [2024-07-15 20:18:26.717475] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:29.433 [2024-07-15 20:18:26.717520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:29.433 [2024-07-15 20:18:26.718065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.718083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b23990 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.718094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b23990 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.718364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.718375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198c030 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.718383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c030 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.718821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.718831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1340 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.718838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1340 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.719263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.719274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19721b0 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.719281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19721b0 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.720621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:29.433 [2024-07-15 20:18:26.720635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:29.433 [2024-07-15 20:18:26.720644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.433 [2024-07-15 20:18:26.720653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:29.433 [2024-07-15 20:18:26.720663] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:29.433 [2024-07-15 20:18:26.721126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.721138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b100c0 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.721151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b100c0 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.721163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b23990 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.721175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198c030 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.721184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1340 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.721193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19721b0 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.721227] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.433 [2024-07-15 20:18:26.721239] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.433 [2024-07-15 20:18:26.721249] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.433 [2024-07-15 20:18:26.721259] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
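The Latency(us) summary above prints one row per NVMe bdev job plus a Total row; the Total figures (1792.67 IOPS, 112.04 MiB/s, 594.41 Fail/s) are just the column sums of the ten per-device rows, up to rounding. As a quick way to re-derive them from a saved copy of this output, the awk sketch below assumes the summary has been captured to a file named bdevperf.log (a hypothetical name; the test itself writes no such file) and that the per-device rows keep the "NvmeXn1 : runtime IOPS MiB/s Fail/s TO/s avg min max" layout shown here. It locates the standalone ":" field so a leading console timestamp does not shift the columns.

# Minimal sketch: sum the IOPS, MiB/s and Fail/s columns of the per-device
# rows and compare the result with the Total row printed by bdevperf.
awk '$0 ~ /Nvme[0-9]+n1 :/ {
        for (i = 1; i <= NF; i++) if ($i == ":") { c = i; break }
        iops += $(c + 2); mibs += $(c + 3); fails += $(c + 4)
     }
     END { printf "computed totals: IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mibs, fails }' bdevperf.log

Against the ten rows above this should print roughly IOPS=1792.67, MiB/s=112.03 and Fail/s=594.42, matching the Total row within rounding.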
00:23:29.433 [2024-07-15 20:18:26.721771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.721784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11290 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.721791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11290 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.722363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.722402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b210 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.722413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b210 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.722844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.722855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194f5d0 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.722862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f5d0 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.723386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.723423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198bca0 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.723434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198bca0 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.723641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.433 [2024-07-15 20:18:26.723652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aece90 with addr=10.0.0.2, port=4420 00:23:29.433 [2024-07-15 20:18:26.723659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aece90 is same with the state(5) to be set 00:23:29.433 [2024-07-15 20:18:26.723671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b100c0 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.723681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.723695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:29.433 [2024-07-15 20:18:26.723710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.723728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:23:29.433 [2024-07-15 20:18:26.723739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.723751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:29.433 [2024-07-15 20:18:26.723762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.723775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:29.433 [2024-07-15 20:18:26.723846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.433 [2024-07-15 20:18:26.723855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.433 [2024-07-15 20:18:26.723861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.433 [2024-07-15 20:18:26.723867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.433 [2024-07-15 20:18:26.723874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11290 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.723883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b210 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.723892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f5d0 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.723901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198bca0 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.723910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aece90 (9): Bad file descriptor 00:23:29.433 [2024-07-15 20:18:26.723917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.723930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:29.433 [2024-07-15 20:18:26.723959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.433 [2024-07-15 20:18:26.723966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.723979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:29.433 [2024-07-15 20:18:26.723988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.723994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.724000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:29.433 [2024-07-15 20:18:26.724009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.433 [2024-07-15 20:18:26.724015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:29.433 [2024-07-15 20:18:26.724022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:29.434 [2024-07-15 20:18:26.724033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:29.434 [2024-07-15 20:18:26.724039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:29.434 [2024-07-15 20:18:26.724046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:29.434 [2024-07-15 20:18:26.724055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:29.434 [2024-07-15 20:18:26.724061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:29.434 [2024-07-15 20:18:26.724068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:29.434 [2024-07-15 20:18:26.724096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.434 [2024-07-15 20:18:26.724103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.434 [2024-07-15 20:18:26.724109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.434 [2024-07-15 20:18:26.724115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.434 [2024-07-15 20:18:26.724121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
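The burst of errors above follows the same pattern for each subsystem: connect() to 10.0.0.2:4420 fails with errno 111 (connection refused), the qpair is then reported with a bad file descriptor, controller reinitialization fails, and the controller is left in a failed state, after which the reset is reported as failed. When triaging a run like this from a saved copy of the console output (console.log below is a hypothetical filename), a couple of grep one-liners are enough to confirm that every cnode failed the same way:

# Count the connection-refused socket errors and list which subsystems
# ended up in a failed state, with how many times each was reported.
grep -c 'connect() failed, errno = 111' console.log
grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' console.log | sort | uniq -c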
00:23:29.695 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:29.695 20:18:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1065820 00:23:30.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1065820) - No such process 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.640 rmmod nvme_tcp 00:23:30.640 rmmod nvme_fabrics 00:23:30.640 rmmod nvme_keyring 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.640 20:18:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.222 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.222 00:23:33.222 real 0m7.703s 00:23:33.222 user 0m18.574s 00:23:33.222 sys 0m1.274s 00:23:33.222 
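The teardown traced above reduces to a tolerant cleanup: a kill of a possibly already-gone target pid (the kill -9 followed by true at shutdown.sh line 142), removal of the per-job state and config files, unloading the nvme transport modules, and flushing the initiator address. A condensed sketch of that sequence is below; NVMF_PID and IFACE stand in for values the real script derives itself, and module removal requires root.

# Condensed sketch of the teardown steps seen in the trace above.
kill -9 "$NVMF_PID" 2>/dev/null || true   # target may already have exited
rm -f ./local-job0-0-verify.state
rm -rf ./bdevperf.conf ./rpcs.txt
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush "$IFACE"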
20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.222 20:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 ************************************ 00:23:33.222 END TEST nvmf_shutdown_tc3 00:23:33.222 ************************************ 00:23:33.222 20:18:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:33.222 20:18:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:33.222 00:23:33.222 real 0m32.522s 00:23:33.222 user 1m16.307s 00:23:33.222 sys 0m9.351s 00:23:33.222 20:18:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.222 20:18:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 ************************************ 00:23:33.222 END TEST nvmf_shutdown 00:23:33.222 ************************************ 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:33.222 20:18:30 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 20:18:30 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 20:18:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:33.222 20:18:30 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.222 20:18:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 ************************************ 00:23:33.222 START TEST nvmf_multicontroller 00:23:33.222 ************************************ 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:33.222 * Looking for test storage... 
00:23:33.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.222 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:33.223 20:18:30 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.223 20:18:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.371 20:18:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:41.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:41.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:41.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:41.371 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.371 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.372 20:18:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:23:41.372 00:23:41.372 --- 10.0.0.2 ping statistics --- 00:23:41.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.372 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:23:41.372 00:23:41.372 --- 10.0.0.1 ping statistics --- 00:23:41.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.372 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1070611 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1070611 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1070611 ']' 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.372 20:18:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 [2024-07-15 20:18:37.687166] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:23:41.372 [2024-07-15 20:18:37.687232] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.372 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.372 [2024-07-15 20:18:37.775554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.372 [2024-07-15 20:18:37.869707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.372 [2024-07-15 20:18:37.869765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.372 [2024-07-15 20:18:37.869773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.372 [2024-07-15 20:18:37.869780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.372 [2024-07-15 20:18:37.869786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
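The nvmf_tcp_init steps in the trace above are what give the test its two endpoints on a single host: one port of the dual-port E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, and nvmf_tgt is then launched inside the namespace so NVMe/TCP traffic crosses real hardware. A minimal standalone sketch of that wiring, assuming the same interface and namespace names the harness uses:

# sketch only -- mirrors what nvmf/common.sh's nvmf_tcp_init does in the trace above
ip netns add cvl_0_0_ns_spdk                         # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (4420) on the initiator-side interface
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace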
00:23:41.372 [2024-07-15 20:18:37.869925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.372 [2024-07-15 20:18:37.870092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.372 [2024-07-15 20:18:37.870091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 [2024-07-15 20:18:38.523498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 Malloc0 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 [2024-07-15 20:18:38.599950] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 
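From host/multicontroller.sh@27 onward the trace is target configuration over JSON-RPC: rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started above over its UNIX socket. For reference, a hedged sketch of the equivalent manual calls for the first subsystem; the second listener on 4421 and the cnode2/Malloc1 subsystem that follow in the trace repeat the same pattern. The rpc.py path and default socket are assumptions; the RPC names and arguments are taken from the trace itself:

# sketch only -- the RPC sequence host/multicontroller.sh drives through rpc_cmd
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                # TCP transport with the options the harness passes
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421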
20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 [2024-07-15 20:18:38.611901] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 Malloc1 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1070958 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1070958 /var/tmp/bdevperf.sock 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1070958 ']' 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.373 20:18:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 NVMe0n1 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.313 1 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 request: 00:23:42.313 { 00:23:42.313 "name": "NVMe0", 00:23:42.313 "trtype": "tcp", 00:23:42.313 "traddr": "10.0.0.2", 00:23:42.313 "adrfam": "ipv4", 00:23:42.313 "trsvcid": "4420", 00:23:42.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.313 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:42.313 "hostaddr": "10.0.0.2", 00:23:42.313 "hostsvcid": "60000", 00:23:42.313 "prchk_reftag": false, 00:23:42.313 "prchk_guard": false, 00:23:42.313 "hdgst": false, 00:23:42.313 "ddgst": false, 00:23:42.313 "method": "bdev_nvme_attach_controller", 00:23:42.313 "req_id": 1 00:23:42.313 } 00:23:42.313 Got JSON-RPC error response 00:23:42.313 response: 00:23:42.313 { 00:23:42.313 "code": -114, 00:23:42.313 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:42.313 } 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.313 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 request: 00:23:42.575 { 00:23:42.575 "name": "NVMe0", 00:23:42.575 "trtype": "tcp", 00:23:42.575 "traddr": "10.0.0.2", 00:23:42.575 "adrfam": "ipv4", 00:23:42.575 "trsvcid": "4420", 00:23:42.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.575 "hostaddr": "10.0.0.2", 00:23:42.575 "hostsvcid": "60000", 00:23:42.575 "prchk_reftag": false, 00:23:42.575 "prchk_guard": false, 00:23:42.575 
"hdgst": false, 00:23:42.575 "ddgst": false, 00:23:42.575 "method": "bdev_nvme_attach_controller", 00:23:42.575 "req_id": 1 00:23:42.575 } 00:23:42.575 Got JSON-RPC error response 00:23:42.575 response: 00:23:42.575 { 00:23:42.575 "code": -114, 00:23:42.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:42.575 } 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 request: 00:23:42.575 { 00:23:42.575 "name": "NVMe0", 00:23:42.575 "trtype": "tcp", 00:23:42.575 "traddr": "10.0.0.2", 00:23:42.575 "adrfam": "ipv4", 00:23:42.575 "trsvcid": "4420", 00:23:42.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.575 "hostaddr": "10.0.0.2", 00:23:42.575 "hostsvcid": "60000", 00:23:42.575 "prchk_reftag": false, 00:23:42.575 "prchk_guard": false, 00:23:42.575 "hdgst": false, 00:23:42.575 "ddgst": false, 00:23:42.575 "multipath": "disable", 00:23:42.575 "method": "bdev_nvme_attach_controller", 00:23:42.575 "req_id": 1 00:23:42.575 } 00:23:42.575 Got JSON-RPC error response 00:23:42.575 response: 00:23:42.575 { 00:23:42.575 "code": -114, 00:23:42.575 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:42.575 } 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.575 20:18:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 request: 00:23:42.575 { 00:23:42.575 "name": "NVMe0", 00:23:42.575 "trtype": "tcp", 00:23:42.575 "traddr": "10.0.0.2", 00:23:42.575 "adrfam": "ipv4", 00:23:42.575 "trsvcid": "4420", 00:23:42.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.575 "hostaddr": "10.0.0.2", 00:23:42.575 "hostsvcid": "60000", 00:23:42.575 "prchk_reftag": false, 00:23:42.575 "prchk_guard": false, 00:23:42.575 "hdgst": false, 00:23:42.575 "ddgst": false, 00:23:42.575 "multipath": "failover", 00:23:42.575 "method": "bdev_nvme_attach_controller", 00:23:42.575 "req_id": 1 00:23:42.575 } 00:23:42.575 Got JSON-RPC error response 00:23:42.575 response: 00:23:42.575 { 00:23:42.575 "code": -114, 00:23:42.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:42.575 } 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 20:18:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:42.836 20:18:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.221 0 00:23:44.221 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1070958 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1070958 ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1070958 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1070958 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1070958' 00:23:44.222 killing process with pid 1070958 00:23:44.222 20:18:41 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1070958 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1070958 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:44.222 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:44.222 [2024-07-15 20:18:38.730883] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:23:44.222 [2024-07-15 20:18:38.730942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070958 ] 00:23:44.222 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.222 [2024-07-15 20:18:38.789899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.222 [2024-07-15 20:18:38.855471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.222 [2024-07-15 20:18:40.159769] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 8cffb0c9-01a9-4d4d-9f24-bcd0d3b197a5 already exists 00:23:44.222 [2024-07-15 20:18:40.159801] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:8cffb0c9-01a9-4d4d-9f24-bcd0d3b197a5 alias for bdev NVMe1n1 00:23:44.222 [2024-07-15 20:18:40.159810] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:44.222 Running I/O for 1 seconds... 
00:23:44.222 00:23:44.222 Latency(us) 00:23:44.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.222 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:44.222 NVMe0n1 : 1.00 28193.58 110.13 0.00 0.00 4525.33 4096.00 11414.19 00:23:44.222 =================================================================================================================== 00:23:44.222 Total : 28193.58 110.13 0.00 0.00 4525.33 4096.00 11414.19 00:23:44.222 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.222 00:23:44.222 Latency(us) 00:23:44.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.222 =================================================================================================================== 00:23:44.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.222 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.222 rmmod nvme_tcp 00:23:44.222 rmmod nvme_fabrics 00:23:44.222 rmmod nvme_keyring 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1070611 ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1070611 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1070611 ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1070611 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.222 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1070611 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1070611' 00:23:44.483 killing process with pid 1070611 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1070611 00:23:44.483 20:18:41 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1070611 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.483 20:18:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.029 20:18:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.029 00:23:47.029 real 0m13.640s 00:23:47.029 user 0m16.907s 00:23:47.029 sys 0m6.198s 00:23:47.029 20:18:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.029 20:18:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.029 ************************************ 00:23:47.029 END TEST nvmf_multicontroller 00:23:47.029 ************************************ 00:23:47.029 20:18:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:47.029 20:18:43 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:47.029 20:18:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.029 20:18:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.029 20:18:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.029 ************************************ 00:23:47.029 START TEST nvmf_aer 00:23:47.029 ************************************ 00:23:47.029 20:18:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:47.029 * Looking for test storage... 
00:23:47.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.029 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.030 20:18:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:53.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:53.615 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:53.615 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:53.615 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.615 
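The device-discovery loop in this aer run works the same way as in the multicontroller run earlier: each supported PCI function (here the two 0x8086:0x159b E810 ports at 0000:4b:00.0/.1) is mapped to its kernel interface by globbing sysfs, and only interfaces that report "up" are kept in net_devs, which then seeds NVMF_TARGET_INTERFACE and NVMF_INITIATOR_INTERFACE. A minimal sketch of that PCI-to-netdev mapping, assuming the same sysfs layout:

# sketch only -- how the trace's @383/@399/@400 steps derive cvl_0_0/cvl_0_1 from PCI addresses
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done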
20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.615 20:18:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:23:53.876 00:23:53.876 --- 10.0.0.2 ping statistics --- 00:23:53.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.876 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:23:53.876 00:23:53.876 --- 10.0.0.1 ping statistics --- 00:23:53.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.876 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.876 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1075633 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1075633 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1075633 ']' 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.136 20:18:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.136 [2024-07-15 20:18:51.357224] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:23:54.136 [2024-07-15 20:18:51.357276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.136 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.136 [2024-07-15 20:18:51.423405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.136 [2024-07-15 20:18:51.488530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.136 [2024-07-15 20:18:51.488568] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:54.136 [2024-07-15 20:18:51.488576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.136 [2024-07-15 20:18:51.488582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.136 [2024-07-15 20:18:51.488588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.136 [2024-07-15 20:18:51.488729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.136 [2024-07-15 20:18:51.488847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.136 [2024-07-15 20:18:51.489004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.136 [2024-07-15 20:18:51.489005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.707 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.707 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:54.707 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.707 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.707 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 [2024-07-15 20:18:52.175785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 Malloc0 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 [2024-07-15 20:18:52.235342] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 [ 00:23:54.968 { 00:23:54.968 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:54.968 "subtype": "Discovery", 00:23:54.968 "listen_addresses": [], 00:23:54.968 "allow_any_host": true, 00:23:54.968 "hosts": [] 00:23:54.968 }, 00:23:54.968 { 00:23:54.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.968 "subtype": "NVMe", 00:23:54.968 "listen_addresses": [ 00:23:54.968 { 00:23:54.968 "trtype": "TCP", 00:23:54.968 "adrfam": "IPv4", 00:23:54.968 "traddr": "10.0.0.2", 00:23:54.968 "trsvcid": "4420" 00:23:54.968 } 00:23:54.968 ], 00:23:54.968 "allow_any_host": true, 00:23:54.968 "hosts": [], 00:23:54.968 "serial_number": "SPDK00000000000001", 00:23:54.968 "model_number": "SPDK bdev Controller", 00:23:54.968 "max_namespaces": 2, 00:23:54.968 "min_cntlid": 1, 00:23:54.968 "max_cntlid": 65519, 00:23:54.968 "namespaces": [ 00:23:54.968 { 00:23:54.968 "nsid": 1, 00:23:54.968 "bdev_name": "Malloc0", 00:23:54.968 "name": "Malloc0", 00:23:54.968 "nguid": "DA8C62A884254433891A04F089426B80", 00:23:54.968 "uuid": "da8c62a8-8425-4433-891a-04f089426b80" 00:23:54.968 } 00:23:54.968 ] 00:23:54.968 } 00:23:54.968 ] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1075800 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:54.968 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:54.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:54.969 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.230 Malloc1 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.230 Asynchronous Event Request test 00:23:55.230 Attaching to 10.0.0.2 00:23:55.230 Attached to 10.0.0.2 00:23:55.230 Registering asynchronous event callbacks... 00:23:55.230 Starting namespace attribute notice tests for all controllers... 00:23:55.230 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:55.230 aer_cb - Changed Namespace 00:23:55.230 Cleaning up... 00:23:55.230 [ 00:23:55.230 { 00:23:55.230 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.230 "subtype": "Discovery", 00:23:55.230 "listen_addresses": [], 00:23:55.230 "allow_any_host": true, 00:23:55.230 "hosts": [] 00:23:55.230 }, 00:23:55.230 { 00:23:55.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.230 "subtype": "NVMe", 00:23:55.230 "listen_addresses": [ 00:23:55.230 { 00:23:55.230 "trtype": "TCP", 00:23:55.230 "adrfam": "IPv4", 00:23:55.230 "traddr": "10.0.0.2", 00:23:55.230 "trsvcid": "4420" 00:23:55.230 } 00:23:55.230 ], 00:23:55.230 "allow_any_host": true, 00:23:55.230 "hosts": [], 00:23:55.230 "serial_number": "SPDK00000000000001", 00:23:55.230 "model_number": "SPDK bdev Controller", 00:23:55.230 "max_namespaces": 2, 00:23:55.230 "min_cntlid": 1, 00:23:55.230 "max_cntlid": 65519, 00:23:55.230 "namespaces": [ 00:23:55.230 { 00:23:55.230 "nsid": 1, 00:23:55.230 "bdev_name": "Malloc0", 00:23:55.230 "name": "Malloc0", 00:23:55.230 "nguid": "DA8C62A884254433891A04F089426B80", 00:23:55.230 "uuid": "da8c62a8-8425-4433-891a-04f089426b80" 00:23:55.230 }, 00:23:55.230 { 00:23:55.230 "nsid": 2, 00:23:55.230 "bdev_name": "Malloc1", 00:23:55.230 "name": "Malloc1", 00:23:55.230 "nguid": "0B1E3B65F81C48859E14977C60D68852", 00:23:55.230 "uuid": "0b1e3b65-f81c-4885-9e14-977c60d68852" 00:23:55.230 } 00:23:55.230 ] 00:23:55.230 } 00:23:55.230 ] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1075800 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.230 rmmod nvme_tcp 00:23:55.230 rmmod nvme_fabrics 00:23:55.230 rmmod nvme_keyring 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1075633 ']' 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1075633 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1075633 ']' 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1075633 00:23:55.230 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1075633 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1075633' 00:23:55.491 killing process with pid 1075633 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1075633 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1075633 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:55.491 20:18:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.043 20:18:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.043 00:23:58.043 real 0m10.964s 00:23:58.043 user 0m7.551s 00:23:58.043 sys 0m5.729s 00:23:58.043 20:18:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.043 20:18:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:58.043 ************************************ 00:23:58.043 END TEST nvmf_aer 00:23:58.043 ************************************ 00:23:58.043 20:18:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:58.043 20:18:54 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:58.043 20:18:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.043 20:18:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.043 20:18:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.043 ************************************ 00:23:58.043 START TEST nvmf_async_init 00:23:58.043 ************************************ 00:23:58.043 20:18:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:58.043 * Looking for test storage... 00:23:58.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.043 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=94047df373ab48c9b7a528619af9704a 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.044 20:18:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:04.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:04.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.633 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:04.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:04.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:04.634 
20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.634 20:19:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.634 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.634 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.634 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:04.634 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:04.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:24:04.895 00:24:04.895 --- 10.0.0.2 ping statistics --- 00:24:04.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.895 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.480 ms 00:24:04.895 00:24:04.895 --- 10.0.0.1 ping statistics --- 00:24:04.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.895 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1079982 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1079982 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1079982 ']' 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.895 20:19:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:04.895 [2024-07-15 20:19:02.286291] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:24:04.895 [2024-07-15 20:19:02.286357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.895 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.155 [2024-07-15 20:19:02.356441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.155 [2024-07-15 20:19:02.429623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.155 [2024-07-15 20:19:02.429661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.155 [2024-07-15 20:19:02.429668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.155 [2024-07-15 20:19:02.429674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.155 [2024-07-15 20:19:02.429680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
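For reference, the nvmfappstart -m 0x1 step recorded above reduces to roughly the two commands below. This is a sketch: the backgrounding via & and the $! capture are assumed from the helper's behaviour, while the binary path, namespace name, flags, and the waitforlisten call are taken from the log itself.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                 # this run reports pid 1079982
waitforlisten "$nvmfpid"   # common.sh helper: waits until /var/tmp/spdk.sock accepts RPCs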
00:24:05.155 [2024-07-15 20:19:02.429699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 [2024-07-15 20:19:03.088556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 null0 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 94047df373ab48c9b7a528619af9704a 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 [2024-07-15 20:19:03.128753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.726 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.987 nvme0n1 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.987 [ 00:24:05.987 { 00:24:05.987 "name": "nvme0n1", 00:24:05.987 "aliases": [ 00:24:05.987 "94047df3-73ab-48c9-b7a5-28619af9704a" 00:24:05.987 ], 00:24:05.987 "product_name": "NVMe disk", 00:24:05.987 "block_size": 512, 00:24:05.987 "num_blocks": 2097152, 00:24:05.987 "uuid": "94047df3-73ab-48c9-b7a5-28619af9704a", 00:24:05.987 "assigned_rate_limits": { 00:24:05.987 "rw_ios_per_sec": 0, 00:24:05.987 "rw_mbytes_per_sec": 0, 00:24:05.987 "r_mbytes_per_sec": 0, 00:24:05.987 "w_mbytes_per_sec": 0 00:24:05.987 }, 00:24:05.987 "claimed": false, 00:24:05.987 "zoned": false, 00:24:05.987 "supported_io_types": { 00:24:05.987 "read": true, 00:24:05.987 "write": true, 00:24:05.987 "unmap": false, 00:24:05.987 "flush": true, 00:24:05.987 "reset": true, 00:24:05.987 "nvme_admin": true, 00:24:05.987 "nvme_io": true, 00:24:05.987 "nvme_io_md": false, 00:24:05.987 "write_zeroes": true, 00:24:05.987 "zcopy": false, 00:24:05.987 "get_zone_info": false, 00:24:05.987 "zone_management": false, 00:24:05.987 "zone_append": false, 00:24:05.987 "compare": true, 00:24:05.987 "compare_and_write": true, 00:24:05.987 "abort": true, 00:24:05.987 "seek_hole": false, 00:24:05.987 "seek_data": false, 00:24:05.987 "copy": true, 00:24:05.987 "nvme_iov_md": false 00:24:05.987 }, 00:24:05.987 "memory_domains": [ 00:24:05.987 { 00:24:05.987 "dma_device_id": "system", 00:24:05.987 "dma_device_type": 1 00:24:05.987 } 00:24:05.987 ], 00:24:05.987 "driver_specific": { 00:24:05.987 "nvme": [ 00:24:05.987 { 00:24:05.987 "trid": { 00:24:05.987 "trtype": "TCP", 00:24:05.987 "adrfam": "IPv4", 00:24:05.987 "traddr": "10.0.0.2", 00:24:05.987 "trsvcid": "4420", 00:24:05.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.987 }, 00:24:05.987 "ctrlr_data": { 00:24:05.987 "cntlid": 1, 00:24:05.987 "vendor_id": "0x8086", 00:24:05.987 "model_number": "SPDK bdev Controller", 00:24:05.987 "serial_number": "00000000000000000000", 00:24:05.987 "firmware_revision": "24.09", 00:24:05.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.987 "oacs": { 00:24:05.987 "security": 0, 00:24:05.987 "format": 0, 00:24:05.987 "firmware": 0, 00:24:05.987 "ns_manage": 0 00:24:05.987 }, 00:24:05.987 "multi_ctrlr": true, 00:24:05.987 "ana_reporting": false 00:24:05.987 }, 00:24:05.987 "vs": { 00:24:05.987 "nvme_version": "1.3" 00:24:05.987 }, 00:24:05.987 "ns_data": { 00:24:05.987 "id": 1, 00:24:05.987 "can_share": true 00:24:05.987 } 00:24:05.987 } 00:24:05.987 ], 00:24:05.987 "mp_policy": "active_passive" 00:24:05.987 } 00:24:05.987 } 00:24:05.987 ] 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
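Stripped of the xtrace noise, the async_init setup recorded above is the following rpc_cmd sequence; all names, sizes, GUIDs, and addresses are copied from the log, and rpc_cmd is the autotest wrapper that forwards each call to the target's /var/tmp/spdk.sock RPC socket.
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd bdev_null_create null0 1024 512        # 512 B blocks; the dump above shows num_blocks 2097152
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 94047df373ab48c9b7a528619af9704a
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_get_bdevs -b nvme0n1              # dump above shows cntlid 1 on port 4420
rpc_cmd bdev_nvme_reset_controller nvme0       # reconnect; the dump that follows shows cntlid 2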
00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.987 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.987 [2024-07-15 20:19:03.377434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:05.987 [2024-07-15 20:19:03.377493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a00df0 (9): Bad file descriptor 00:24:06.247 [2024-07-15 20:19:03.509220] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.247 [ 00:24:06.247 { 00:24:06.247 "name": "nvme0n1", 00:24:06.247 "aliases": [ 00:24:06.247 "94047df3-73ab-48c9-b7a5-28619af9704a" 00:24:06.247 ], 00:24:06.247 "product_name": "NVMe disk", 00:24:06.247 "block_size": 512, 00:24:06.247 "num_blocks": 2097152, 00:24:06.247 "uuid": "94047df3-73ab-48c9-b7a5-28619af9704a", 00:24:06.247 "assigned_rate_limits": { 00:24:06.247 "rw_ios_per_sec": 0, 00:24:06.247 "rw_mbytes_per_sec": 0, 00:24:06.247 "r_mbytes_per_sec": 0, 00:24:06.247 "w_mbytes_per_sec": 0 00:24:06.247 }, 00:24:06.247 "claimed": false, 00:24:06.247 "zoned": false, 00:24:06.247 "supported_io_types": { 00:24:06.247 "read": true, 00:24:06.247 "write": true, 00:24:06.247 "unmap": false, 00:24:06.247 "flush": true, 00:24:06.247 "reset": true, 00:24:06.247 "nvme_admin": true, 00:24:06.247 "nvme_io": true, 00:24:06.247 "nvme_io_md": false, 00:24:06.247 "write_zeroes": true, 00:24:06.247 "zcopy": false, 00:24:06.247 "get_zone_info": false, 00:24:06.247 "zone_management": false, 00:24:06.247 "zone_append": false, 00:24:06.247 "compare": true, 00:24:06.247 "compare_and_write": true, 00:24:06.247 "abort": true, 00:24:06.247 "seek_hole": false, 00:24:06.247 "seek_data": false, 00:24:06.247 "copy": true, 00:24:06.247 "nvme_iov_md": false 00:24:06.247 }, 00:24:06.247 "memory_domains": [ 00:24:06.247 { 00:24:06.247 "dma_device_id": "system", 00:24:06.247 "dma_device_type": 1 00:24:06.247 } 00:24:06.247 ], 00:24:06.247 "driver_specific": { 00:24:06.247 "nvme": [ 00:24:06.247 { 00:24:06.247 "trid": { 00:24:06.247 "trtype": "TCP", 00:24:06.247 "adrfam": "IPv4", 00:24:06.247 "traddr": "10.0.0.2", 00:24:06.247 "trsvcid": "4420", 00:24:06.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:06.247 }, 00:24:06.247 "ctrlr_data": { 00:24:06.247 "cntlid": 2, 00:24:06.247 "vendor_id": "0x8086", 00:24:06.247 "model_number": "SPDK bdev Controller", 00:24:06.247 "serial_number": "00000000000000000000", 00:24:06.247 "firmware_revision": "24.09", 00:24:06.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.247 "oacs": { 00:24:06.247 "security": 0, 00:24:06.247 "format": 0, 00:24:06.247 "firmware": 0, 00:24:06.247 "ns_manage": 0 00:24:06.247 }, 00:24:06.247 "multi_ctrlr": true, 00:24:06.247 "ana_reporting": false 00:24:06.247 }, 00:24:06.247 "vs": { 00:24:06.247 "nvme_version": "1.3" 00:24:06.247 }, 00:24:06.247 "ns_data": { 00:24:06.247 "id": 1, 00:24:06.247 "can_share": true 00:24:06.247 } 00:24:06.247 } 00:24:06.247 ], 00:24:06.247 "mp_policy": "active_passive" 00:24:06.247 } 00:24:06.247 } 
00:24:06.247 ] 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GGJpFskR3i 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GGJpFskR3i 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.247 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.247 [2024-07-15 20:19:03.562032] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:06.247 [2024-07-15 20:19:03.562149] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GGJpFskR3i 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.248 [2024-07-15 20:19:03.570045] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GGJpFskR3i 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.248 [2024-07-15 20:19:03.578087] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.248 [2024-07-15 20:19:03.578126] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
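The secure-channel steps just recorded condense to the sketch below. The key string, paths, NQNs, and port are copied from the log; the redirection of the key material into the temp file is implied by the xtrace output rather than shown, so treat that line as an assumption.
key_path=$(mktemp)                      # this run got /tmp/tmp.GGJpFskR3i
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
# Both the PSK-path listener setup and the attach with a ctrlr PSK log
# "deprecated feature ... to be removed in v24.09" warnings, as seen above.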
00:24:06.248 nvme0n1 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.248 [ 00:24:06.248 { 00:24:06.248 "name": "nvme0n1", 00:24:06.248 "aliases": [ 00:24:06.248 "94047df3-73ab-48c9-b7a5-28619af9704a" 00:24:06.248 ], 00:24:06.248 "product_name": "NVMe disk", 00:24:06.248 "block_size": 512, 00:24:06.248 "num_blocks": 2097152, 00:24:06.248 "uuid": "94047df3-73ab-48c9-b7a5-28619af9704a", 00:24:06.248 "assigned_rate_limits": { 00:24:06.248 "rw_ios_per_sec": 0, 00:24:06.248 "rw_mbytes_per_sec": 0, 00:24:06.248 "r_mbytes_per_sec": 0, 00:24:06.248 "w_mbytes_per_sec": 0 00:24:06.248 }, 00:24:06.248 "claimed": false, 00:24:06.248 "zoned": false, 00:24:06.248 "supported_io_types": { 00:24:06.248 "read": true, 00:24:06.248 "write": true, 00:24:06.248 "unmap": false, 00:24:06.248 "flush": true, 00:24:06.248 "reset": true, 00:24:06.248 "nvme_admin": true, 00:24:06.248 "nvme_io": true, 00:24:06.248 "nvme_io_md": false, 00:24:06.248 "write_zeroes": true, 00:24:06.248 "zcopy": false, 00:24:06.248 "get_zone_info": false, 00:24:06.248 "zone_management": false, 00:24:06.248 "zone_append": false, 00:24:06.248 "compare": true, 00:24:06.248 "compare_and_write": true, 00:24:06.248 "abort": true, 00:24:06.248 "seek_hole": false, 00:24:06.248 "seek_data": false, 00:24:06.248 "copy": true, 00:24:06.248 "nvme_iov_md": false 00:24:06.248 }, 00:24:06.248 "memory_domains": [ 00:24:06.248 { 00:24:06.248 "dma_device_id": "system", 00:24:06.248 "dma_device_type": 1 00:24:06.248 } 00:24:06.248 ], 00:24:06.248 "driver_specific": { 00:24:06.248 "nvme": [ 00:24:06.248 { 00:24:06.248 "trid": { 00:24:06.248 "trtype": "TCP", 00:24:06.248 "adrfam": "IPv4", 00:24:06.248 "traddr": "10.0.0.2", 00:24:06.248 "trsvcid": "4421", 00:24:06.248 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:06.248 }, 00:24:06.248 "ctrlr_data": { 00:24:06.248 "cntlid": 3, 00:24:06.248 "vendor_id": "0x8086", 00:24:06.248 "model_number": "SPDK bdev Controller", 00:24:06.248 "serial_number": "00000000000000000000", 00:24:06.248 "firmware_revision": "24.09", 00:24:06.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.248 "oacs": { 00:24:06.248 "security": 0, 00:24:06.248 "format": 0, 00:24:06.248 "firmware": 0, 00:24:06.248 "ns_manage": 0 00:24:06.248 }, 00:24:06.248 "multi_ctrlr": true, 00:24:06.248 "ana_reporting": false 00:24:06.248 }, 00:24:06.248 "vs": { 00:24:06.248 "nvme_version": "1.3" 00:24:06.248 }, 00:24:06.248 "ns_data": { 00:24:06.248 "id": 1, 00:24:06.248 "can_share": true 00:24:06.248 } 00:24:06.248 } 00:24:06.248 ], 00:24:06.248 "mp_policy": "active_passive" 00:24:06.248 } 00:24:06.248 } 00:24:06.248 ] 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.GGJpFskR3i 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.248 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.509 rmmod nvme_tcp 00:24:06.509 rmmod nvme_fabrics 00:24:06.509 rmmod nvme_keyring 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1079982 ']' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1079982 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1079982 ']' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1079982 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1079982 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1079982' 00:24:06.509 killing process with pid 1079982 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1079982 00:24:06.509 [2024-07-15 20:19:03.791552] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:06.509 [2024-07-15 20:19:03.791578] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1079982 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.509 20:19:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:09.055 20:19:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:09.055 00:24:09.055 real 0m10.985s 00:24:09.055 user 0m3.809s 00:24:09.055 sys 0m5.582s 00:24:09.055 20:19:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:09.055 20:19:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.055 ************************************ 00:24:09.055 END TEST nvmf_async_init 00:24:09.055 ************************************ 00:24:09.055 20:19:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:09.055 20:19:06 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:09.055 20:19:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:09.056 20:19:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.056 20:19:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:09.056 ************************************ 00:24:09.056 START TEST dma 00:24:09.056 ************************************ 00:24:09.056 20:19:06 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:09.056 * Looking for test storage... 00:24:09.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.056 20:19:06 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.056 20:19:06 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.056 20:19:06 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.056 20:19:06 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.056 20:19:06 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:09.056 20:19:06 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.056 20:19:06 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.056 20:19:06 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:09.056 20:19:06 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:09.056 00:24:09.056 real 0m0.139s 00:24:09.056 user 0m0.058s 00:24:09.056 sys 0m0.090s 00:24:09.056 20:19:06 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:09.056 20:19:06 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:09.056 ************************************ 00:24:09.056 END TEST dma 00:24:09.056 ************************************ 00:24:09.056 20:19:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:09.056 20:19:06 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:09.056 20:19:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:09.056 20:19:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.056 20:19:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:09.056 ************************************ 00:24:09.056 START TEST nvmf_identify 00:24:09.056 ************************************ 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:09.056 * Looking for test storage... 00:24:09.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.056 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.057 20:19:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.057 20:19:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.057 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.057 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.057 20:19:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.057 20:19:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.200 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.200 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.200 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.200 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.200 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.201 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.201 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.201 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.201 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.201 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:17.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:24:17.202 00:24:17.202 --- 10.0.0.2 ping statistics --- 00:24:17.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.202 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:24:17.202 00:24:17.202 --- 10.0.0.1 ping statistics --- 00:24:17.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.202 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1084501 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1084501 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1084501 ']' 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.202 20:19:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 [2024-07-15 20:19:13.678724] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:24:17.202 [2024-07-15 20:19:13.678785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.202 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.202 [2024-07-15 20:19:13.750024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.202 [2024-07-15 20:19:13.826993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.202 [2024-07-15 20:19:13.827032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.202 [2024-07-15 20:19:13.827040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.202 [2024-07-15 20:19:13.827046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.202 [2024-07-15 20:19:13.827052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.202 [2024-07-15 20:19:13.827161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.202 [2024-07-15 20:19:13.827384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.202 [2024-07-15 20:19:13.827385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.202 [2024-07-15 20:19:13.827239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 [2024-07-15 20:19:14.464563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 Malloc0 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 [2024-07-15 20:19:14.560043] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.203 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:17.203 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.203 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.203 [ 00:24:17.203 { 00:24:17.203 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:17.203 "subtype": "Discovery", 00:24:17.203 "listen_addresses": [ 00:24:17.203 { 00:24:17.203 "trtype": "TCP", 00:24:17.203 "adrfam": "IPv4", 00:24:17.203 "traddr": "10.0.0.2", 00:24:17.203 "trsvcid": "4420" 00:24:17.203 } 00:24:17.203 ], 00:24:17.203 "allow_any_host": true, 00:24:17.203 "hosts": [] 00:24:17.203 }, 00:24:17.203 { 00:24:17.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.203 "subtype": "NVMe", 00:24:17.203 "listen_addresses": [ 00:24:17.203 { 00:24:17.203 "trtype": "TCP", 00:24:17.203 "adrfam": "IPv4", 00:24:17.203 "traddr": "10.0.0.2", 00:24:17.203 "trsvcid": "4420" 00:24:17.203 } 00:24:17.203 ], 00:24:17.203 "allow_any_host": true, 00:24:17.203 "hosts": [], 00:24:17.203 "serial_number": "SPDK00000000000001", 00:24:17.203 "model_number": "SPDK bdev Controller", 00:24:17.203 "max_namespaces": 32, 00:24:17.203 "min_cntlid": 1, 00:24:17.203 "max_cntlid": 65519, 00:24:17.203 "namespaces": [ 00:24:17.203 { 00:24:17.203 "nsid": 1, 00:24:17.203 "bdev_name": "Malloc0", 00:24:17.203 "name": "Malloc0", 00:24:17.203 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:17.203 "eui64": "ABCDEF0123456789", 00:24:17.203 "uuid": "49e8dee6-f213-4085-a306-05ebb5774d79" 00:24:17.203 } 00:24:17.203 ] 00:24:17.203 } 00:24:17.203 ] 00:24:17.203 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.203 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:17.203 [2024-07-15 20:19:14.623515] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
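Note: the target bring-up traced above (host/identify.sh@24 through @39) reduces to a short RPC sequence. The block below is a condensed recap reconstructed from those xtrace lines, not an extract of identify.sh itself; rpc_cmd is the autotest wrapper around scripts/rpc.py, spdk_nvme_identify is the build/bin binary invoked at @39, and the 10.0.0.2:4420 address comes from the namespaced phy setup shown earlier in this log.

  # Create the TCP transport and a 64 MB malloc bdev (512-byte blocks), as traced above.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0

  # Expose Malloc0 as namespace 1 of cnode1 and listen on the in-namespace target address,
  # for both the NVMe subsystem and the discovery service.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems   # sanity check: discovery + cnode1 with Malloc0 as nsid 1

  # Query the discovery controller from the initiator side; its output is the
  # "NVMe over Fabrics controller at 10.0.0.2:4420" identify dump that follows.
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all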
00:24:17.203 [2024-07-15 20:19:14.623585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084724 ] 00:24:17.203 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.465 [2024-07-15 20:19:14.655773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:17.465 [2024-07-15 20:19:14.655826] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.465 [2024-07-15 20:19:14.655831] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.465 [2024-07-15 20:19:14.655841] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.465 [2024-07-15 20:19:14.655847] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.465 [2024-07-15 20:19:14.659153] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:17.465 [2024-07-15 20:19:14.659182] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb78ec0 0 00:24:17.465 [2024-07-15 20:19:14.667133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.465 [2024-07-15 20:19:14.667146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.465 [2024-07-15 20:19:14.667150] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.465 [2024-07-15 20:19:14.667153] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.465 [2024-07-15 20:19:14.667189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.465 [2024-07-15 20:19:14.667194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.465 [2024-07-15 20:19:14.667198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.667211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.466 [2024-07-15 20:19:14.667228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.674131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 [2024-07-15 20:19:14.674140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.674144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.674157] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.466 [2024-07-15 20:19:14.674164] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:17.466 [2024-07-15 20:19:14.674169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:17.466 [2024-07-15 20:19:14.674182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674186] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.674196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.466 [2024-07-15 20:19:14.674209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.674457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 [2024-07-15 20:19:14.674464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.674467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.674476] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:17.466 [2024-07-15 20:19:14.674483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:17.466 [2024-07-15 20:19:14.674490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.674504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.466 [2024-07-15 20:19:14.674515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.674740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 [2024-07-15 20:19:14.674747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.674750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.674759] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:17.466 [2024-07-15 20:19:14.674766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.466 [2024-07-15 20:19:14.674773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.674780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.674787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.466 [2024-07-15 20:19:14.674797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.674994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 
[2024-07-15 20:19:14.675000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.675004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.675012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.466 [2024-07-15 20:19:14.675021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.675035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.466 [2024-07-15 20:19:14.675045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.675251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 [2024-07-15 20:19:14.675259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.675262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.675271] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:17.466 [2024-07-15 20:19:14.675276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:17.466 [2024-07-15 20:19:14.675283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.466 [2024-07-15 20:19:14.675388] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:17.466 [2024-07-15 20:19:14.675392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.466 [2024-07-15 20:19:14.675401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.675414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.466 [2024-07-15 20:19:14.675425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.675632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 [2024-07-15 20:19:14.675638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.675641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:17.466 [2024-07-15 20:19:14.675645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.675650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.466 [2024-07-15 20:19:14.675659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.466 [2024-07-15 20:19:14.675673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.466 [2024-07-15 20:19:14.675685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.466 [2024-07-15 20:19:14.675919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.466 [2024-07-15 20:19:14.675925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.466 [2024-07-15 20:19:14.675929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.466 [2024-07-15 20:19:14.675932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.466 [2024-07-15 20:19:14.675937] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.466 [2024-07-15 20:19:14.675942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:17.467 [2024-07-15 20:19:14.675949] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:17.467 [2024-07-15 20:19:14.675962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.467 [2024-07-15 20:19:14.675971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.675974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.675981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.467 [2024-07-15 20:19:14.675991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.467 [2024-07-15 20:19:14.676215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.467 [2024-07-15 20:19:14.676222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.467 [2024-07-15 20:19:14.676225] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.676229] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb78ec0): datao=0, datal=4096, cccid=0 00:24:17.467 [2024-07-15 20:19:14.676234] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfbe40) on tqpair(0xb78ec0): expected_datao=0, payload_size=4096 00:24:17.467 [2024-07-15 20:19:14.676238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:17.467 [2024-07-15 20:19:14.676285] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.676290] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.467 [2024-07-15 20:19:14.717336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.467 [2024-07-15 20:19:14.717340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.467 [2024-07-15 20:19:14.717352] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:17.467 [2024-07-15 20:19:14.717361] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:17.467 [2024-07-15 20:19:14.717365] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:17.467 [2024-07-15 20:19:14.717370] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:17.467 [2024-07-15 20:19:14.717375] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:17.467 [2024-07-15 20:19:14.717380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:17.467 [2024-07-15 20:19:14.717388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.467 [2024-07-15 20:19:14.717398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.717414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.467 [2024-07-15 20:19:14.717427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.467 [2024-07-15 20:19:14.717614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.467 [2024-07-15 20:19:14.717620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.467 [2024-07-15 20:19:14.717624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.467 [2024-07-15 20:19:14.717635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.717648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.467 [2024-07-15 20:19:14.717655] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.717667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.467 [2024-07-15 20:19:14.717673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.717686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.467 [2024-07-15 20:19:14.717692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.717704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.467 [2024-07-15 20:19:14.717709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.467 [2024-07-15 20:19:14.717719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.467 [2024-07-15 20:19:14.717726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.717729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.717736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.467 [2024-07-15 20:19:14.717747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbe40, cid 0, qid 0 00:24:17.467 [2024-07-15 20:19:14.717753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfbfc0, cid 1, qid 0 00:24:17.467 [2024-07-15 20:19:14.717757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc140, cid 2, qid 0 00:24:17.467 [2024-07-15 20:19:14.717764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.467 [2024-07-15 20:19:14.717769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc440, cid 4, qid 0 00:24:17.467 [2024-07-15 20:19:14.718045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.467 [2024-07-15 20:19:14.718052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.467 [2024-07-15 20:19:14.718055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.718059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc440) on tqpair=0xb78ec0 00:24:17.467 [2024-07-15 20:19:14.718063] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:17.467 [2024-07-15 20:19:14.718068] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:17.467 [2024-07-15 20:19:14.718079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.718083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.718089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.467 [2024-07-15 20:19:14.718099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc440, cid 4, qid 0 00:24:17.467 [2024-07-15 20:19:14.722133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.467 [2024-07-15 20:19:14.722141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.467 [2024-07-15 20:19:14.722145] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.722148] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb78ec0): datao=0, datal=4096, cccid=4 00:24:17.467 [2024-07-15 20:19:14.722153] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfc440) on tqpair(0xb78ec0): expected_datao=0, payload_size=4096 00:24:17.467 [2024-07-15 20:19:14.722157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.722164] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.722167] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.722173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.467 [2024-07-15 20:19:14.722179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.467 [2024-07-15 20:19:14.722182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.722186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc440) on tqpair=0xb78ec0 00:24:17.467 [2024-07-15 20:19:14.722198] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:17.467 [2024-07-15 20:19:14.722219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.467 [2024-07-15 20:19:14.722223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb78ec0) 00:24:17.467 [2024-07-15 20:19:14.722230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.468 [2024-07-15 20:19:14.722237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb78ec0) 00:24:17.468 [2024-07-15 20:19:14.722250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.468 [2024-07-15 20:19:14.722265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbfc440, cid 4, qid 0 00:24:17.468 [2024-07-15 20:19:14.722270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc5c0, cid 5, qid 0 00:24:17.468 [2024-07-15 20:19:14.722541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.468 [2024-07-15 20:19:14.722550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.468 [2024-07-15 20:19:14.722554] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722558] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb78ec0): datao=0, datal=1024, cccid=4 00:24:17.468 [2024-07-15 20:19:14.722562] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfc440) on tqpair(0xb78ec0): expected_datao=0, payload_size=1024 00:24:17.468 [2024-07-15 20:19:14.722566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722573] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722576] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.468 [2024-07-15 20:19:14.722587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.468 [2024-07-15 20:19:14.722591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.722594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc5c0) on tqpair=0xb78ec0 00:24:17.468 [2024-07-15 20:19:14.763331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.468 [2024-07-15 20:19:14.763342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.468 [2024-07-15 20:19:14.763345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc440) on tqpair=0xb78ec0 00:24:17.468 [2024-07-15 20:19:14.763367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb78ec0) 00:24:17.468 [2024-07-15 20:19:14.763378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.468 [2024-07-15 20:19:14.763394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc440, cid 4, qid 0 00:24:17.468 [2024-07-15 20:19:14.763644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.468 [2024-07-15 20:19:14.763650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.468 [2024-07-15 20:19:14.763654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763657] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb78ec0): datao=0, datal=3072, cccid=4 00:24:17.468 [2024-07-15 20:19:14.763661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfc440) on tqpair(0xb78ec0): expected_datao=0, payload_size=3072 00:24:17.468 [2024-07-15 20:19:14.763666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763672] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763676] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.468 [2024-07-15 20:19:14.763867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.468 [2024-07-15 20:19:14.763870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc440) on tqpair=0xb78ec0 00:24:17.468 [2024-07-15 20:19:14.763882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.763886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb78ec0) 00:24:17.468 [2024-07-15 20:19:14.763892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.468 [2024-07-15 20:19:14.763905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc440, cid 4, qid 0 00:24:17.468 [2024-07-15 20:19:14.764153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.468 [2024-07-15 20:19:14.764160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.468 [2024-07-15 20:19:14.764166] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.764170] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb78ec0): datao=0, datal=8, cccid=4 00:24:17.468 [2024-07-15 20:19:14.764174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfc440) on tqpair(0xb78ec0): expected_datao=0, payload_size=8 00:24:17.468 [2024-07-15 20:19:14.764179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.764185] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.764189] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.809139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.468 [2024-07-15 20:19:14.809148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.468 [2024-07-15 20:19:14.809152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.468 [2024-07-15 20:19:14.809156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc440) on tqpair=0xb78ec0 00:24:17.468 ===================================================== 00:24:17.468 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:17.468 ===================================================== 00:24:17.468 Controller Capabilities/Features 00:24:17.468 ================================ 00:24:17.468 Vendor ID: 0000 00:24:17.468 Subsystem Vendor ID: 0000 00:24:17.468 Serial Number: .................... 00:24:17.468 Model Number: ........................................ 
00:24:17.468 Firmware Version: 24.09 00:24:17.468 Recommended Arb Burst: 0 00:24:17.468 IEEE OUI Identifier: 00 00 00 00:24:17.468 Multi-path I/O 00:24:17.468 May have multiple subsystem ports: No 00:24:17.468 May have multiple controllers: No 00:24:17.468 Associated with SR-IOV VF: No 00:24:17.468 Max Data Transfer Size: 131072 00:24:17.468 Max Number of Namespaces: 0 00:24:17.468 Max Number of I/O Queues: 1024 00:24:17.468 NVMe Specification Version (VS): 1.3 00:24:17.468 NVMe Specification Version (Identify): 1.3 00:24:17.468 Maximum Queue Entries: 128 00:24:17.468 Contiguous Queues Required: Yes 00:24:17.468 Arbitration Mechanisms Supported 00:24:17.468 Weighted Round Robin: Not Supported 00:24:17.468 Vendor Specific: Not Supported 00:24:17.468 Reset Timeout: 15000 ms 00:24:17.468 Doorbell Stride: 4 bytes 00:24:17.468 NVM Subsystem Reset: Not Supported 00:24:17.468 Command Sets Supported 00:24:17.468 NVM Command Set: Supported 00:24:17.468 Boot Partition: Not Supported 00:24:17.468 Memory Page Size Minimum: 4096 bytes 00:24:17.468 Memory Page Size Maximum: 4096 bytes 00:24:17.468 Persistent Memory Region: Not Supported 00:24:17.468 Optional Asynchronous Events Supported 00:24:17.468 Namespace Attribute Notices: Not Supported 00:24:17.468 Firmware Activation Notices: Not Supported 00:24:17.468 ANA Change Notices: Not Supported 00:24:17.468 PLE Aggregate Log Change Notices: Not Supported 00:24:17.468 LBA Status Info Alert Notices: Not Supported 00:24:17.468 EGE Aggregate Log Change Notices: Not Supported 00:24:17.468 Normal NVM Subsystem Shutdown event: Not Supported 00:24:17.468 Zone Descriptor Change Notices: Not Supported 00:24:17.468 Discovery Log Change Notices: Supported 00:24:17.468 Controller Attributes 00:24:17.468 128-bit Host Identifier: Not Supported 00:24:17.468 Non-Operational Permissive Mode: Not Supported 00:24:17.468 NVM Sets: Not Supported 00:24:17.468 Read Recovery Levels: Not Supported 00:24:17.468 Endurance Groups: Not Supported 00:24:17.468 Predictable Latency Mode: Not Supported 00:24:17.468 Traffic Based Keep ALive: Not Supported 00:24:17.468 Namespace Granularity: Not Supported 00:24:17.468 SQ Associations: Not Supported 00:24:17.468 UUID List: Not Supported 00:24:17.468 Multi-Domain Subsystem: Not Supported 00:24:17.468 Fixed Capacity Management: Not Supported 00:24:17.469 Variable Capacity Management: Not Supported 00:24:17.469 Delete Endurance Group: Not Supported 00:24:17.469 Delete NVM Set: Not Supported 00:24:17.469 Extended LBA Formats Supported: Not Supported 00:24:17.469 Flexible Data Placement Supported: Not Supported 00:24:17.469 00:24:17.469 Controller Memory Buffer Support 00:24:17.469 ================================ 00:24:17.469 Supported: No 00:24:17.469 00:24:17.469 Persistent Memory Region Support 00:24:17.469 ================================ 00:24:17.469 Supported: No 00:24:17.469 00:24:17.469 Admin Command Set Attributes 00:24:17.469 ============================ 00:24:17.469 Security Send/Receive: Not Supported 00:24:17.469 Format NVM: Not Supported 00:24:17.469 Firmware Activate/Download: Not Supported 00:24:17.469 Namespace Management: Not Supported 00:24:17.469 Device Self-Test: Not Supported 00:24:17.469 Directives: Not Supported 00:24:17.469 NVMe-MI: Not Supported 00:24:17.469 Virtualization Management: Not Supported 00:24:17.469 Doorbell Buffer Config: Not Supported 00:24:17.469 Get LBA Status Capability: Not Supported 00:24:17.469 Command & Feature Lockdown Capability: Not Supported 00:24:17.469 Abort Command Limit: 1 00:24:17.469 Async 
Event Request Limit: 4 00:24:17.469 Number of Firmware Slots: N/A 00:24:17.469 Firmware Slot 1 Read-Only: N/A 00:24:17.469 Firmware Activation Without Reset: N/A 00:24:17.469 Multiple Update Detection Support: N/A 00:24:17.469 Firmware Update Granularity: No Information Provided 00:24:17.469 Per-Namespace SMART Log: No 00:24:17.469 Asymmetric Namespace Access Log Page: Not Supported 00:24:17.469 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:17.469 Command Effects Log Page: Not Supported 00:24:17.469 Get Log Page Extended Data: Supported 00:24:17.469 Telemetry Log Pages: Not Supported 00:24:17.469 Persistent Event Log Pages: Not Supported 00:24:17.469 Supported Log Pages Log Page: May Support 00:24:17.469 Commands Supported & Effects Log Page: Not Supported 00:24:17.469 Feature Identifiers & Effects Log Page:May Support 00:24:17.469 NVMe-MI Commands & Effects Log Page: May Support 00:24:17.469 Data Area 4 for Telemetry Log: Not Supported 00:24:17.469 Error Log Page Entries Supported: 128 00:24:17.469 Keep Alive: Not Supported 00:24:17.469 00:24:17.469 NVM Command Set Attributes 00:24:17.469 ========================== 00:24:17.469 Submission Queue Entry Size 00:24:17.469 Max: 1 00:24:17.469 Min: 1 00:24:17.469 Completion Queue Entry Size 00:24:17.469 Max: 1 00:24:17.469 Min: 1 00:24:17.469 Number of Namespaces: 0 00:24:17.469 Compare Command: Not Supported 00:24:17.469 Write Uncorrectable Command: Not Supported 00:24:17.469 Dataset Management Command: Not Supported 00:24:17.469 Write Zeroes Command: Not Supported 00:24:17.469 Set Features Save Field: Not Supported 00:24:17.469 Reservations: Not Supported 00:24:17.469 Timestamp: Not Supported 00:24:17.469 Copy: Not Supported 00:24:17.469 Volatile Write Cache: Not Present 00:24:17.469 Atomic Write Unit (Normal): 1 00:24:17.469 Atomic Write Unit (PFail): 1 00:24:17.469 Atomic Compare & Write Unit: 1 00:24:17.469 Fused Compare & Write: Supported 00:24:17.469 Scatter-Gather List 00:24:17.469 SGL Command Set: Supported 00:24:17.469 SGL Keyed: Supported 00:24:17.469 SGL Bit Bucket Descriptor: Not Supported 00:24:17.469 SGL Metadata Pointer: Not Supported 00:24:17.469 Oversized SGL: Not Supported 00:24:17.469 SGL Metadata Address: Not Supported 00:24:17.469 SGL Offset: Supported 00:24:17.469 Transport SGL Data Block: Not Supported 00:24:17.469 Replay Protected Memory Block: Not Supported 00:24:17.469 00:24:17.469 Firmware Slot Information 00:24:17.469 ========================= 00:24:17.469 Active slot: 0 00:24:17.469 00:24:17.469 00:24:17.469 Error Log 00:24:17.469 ========= 00:24:17.469 00:24:17.469 Active Namespaces 00:24:17.469 ================= 00:24:17.469 Discovery Log Page 00:24:17.469 ================== 00:24:17.469 Generation Counter: 2 00:24:17.469 Number of Records: 2 00:24:17.469 Record Format: 0 00:24:17.469 00:24:17.469 Discovery Log Entry 0 00:24:17.469 ---------------------- 00:24:17.469 Transport Type: 3 (TCP) 00:24:17.469 Address Family: 1 (IPv4) 00:24:17.469 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:17.469 Entry Flags: 00:24:17.469 Duplicate Returned Information: 1 00:24:17.469 Explicit Persistent Connection Support for Discovery: 1 00:24:17.469 Transport Requirements: 00:24:17.469 Secure Channel: Not Required 00:24:17.469 Port ID: 0 (0x0000) 00:24:17.469 Controller ID: 65535 (0xffff) 00:24:17.469 Admin Max SQ Size: 128 00:24:17.469 Transport Service Identifier: 4420 00:24:17.469 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:17.469 Transport Address: 10.0.0.2 00:24:17.469 
Discovery Log Entry 1 00:24:17.469 ---------------------- 00:24:17.469 Transport Type: 3 (TCP) 00:24:17.469 Address Family: 1 (IPv4) 00:24:17.469 Subsystem Type: 2 (NVM Subsystem) 00:24:17.469 Entry Flags: 00:24:17.469 Duplicate Returned Information: 0 00:24:17.469 Explicit Persistent Connection Support for Discovery: 0 00:24:17.469 Transport Requirements: 00:24:17.469 Secure Channel: Not Required 00:24:17.469 Port ID: 0 (0x0000) 00:24:17.469 Controller ID: 65535 (0xffff) 00:24:17.469 Admin Max SQ Size: 128 00:24:17.469 Transport Service Identifier: 4420 00:24:17.469 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:17.469 Transport Address: 10.0.0.2 [2024-07-15 20:19:14.809239] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:17.469 [2024-07-15 20:19:14.809249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbe40) on tqpair=0xb78ec0 00:24:17.469 [2024-07-15 20:19:14.809256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.469 [2024-07-15 20:19:14.809262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfbfc0) on tqpair=0xb78ec0 00:24:17.469 [2024-07-15 20:19:14.809266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.469 [2024-07-15 20:19:14.809271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc140) on tqpair=0xb78ec0 00:24:17.469 [2024-07-15 20:19:14.809275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.469 [2024-07-15 20:19:14.809280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.469 [2024-07-15 20:19:14.809285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.469 [2024-07-15 20:19:14.809295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.469 [2024-07-15 20:19:14.809299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.469 [2024-07-15 20:19:14.809303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.469 [2024-07-15 20:19:14.809310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.469 [2024-07-15 20:19:14.809323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.469 [2024-07-15 20:19:14.809561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.469 [2024-07-15 20:19:14.809567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.469 [2024-07-15 20:19:14.809571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.469 [2024-07-15 20:19:14.809574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.469 [2024-07-15 20:19:14.809581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.469 [2024-07-15 20:19:14.809585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.809589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.809595] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.809608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.809828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.809834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.809840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.809844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.809849] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:17.470 [2024-07-15 20:19:14.809853] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:17.470 [2024-07-15 20:19:14.809862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.809866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.809869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.809876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.809886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.810118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.810129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.810133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.810147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.810161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.810171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.810403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.810410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.810413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.810426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810433] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.810440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.810449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.810681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.810687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.810691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.810704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.810718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.810727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.810930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.810937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.810940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.810953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.810960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.810967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.810976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.811173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.811180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.811183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.811196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.811210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.811220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.811432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.811439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.811442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.811455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.811469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.811478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.811712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.811718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.811721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.811734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.811741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.811748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.811757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.811991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.811999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.812002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.812006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.812015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.812019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.812022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.812029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.812038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.812243] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.812250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.812253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.812257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.470 [2024-07-15 20:19:14.812266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.812270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.470 [2024-07-15 20:19:14.812273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.470 [2024-07-15 20:19:14.812280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.470 [2024-07-15 20:19:14.812290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.470 [2024-07-15 20:19:14.812476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.470 [2024-07-15 20:19:14.812482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.470 [2024-07-15 20:19:14.812485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.471 [2024-07-15 20:19:14.812498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.471 [2024-07-15 20:19:14.812512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.471 [2024-07-15 20:19:14.812521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.471 [2024-07-15 20:19:14.812719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.471 [2024-07-15 20:19:14.812725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.471 [2024-07-15 20:19:14.812728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.471 [2024-07-15 20:19:14.812741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.471 [2024-07-15 20:19:14.812755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.471 [2024-07-15 20:19:14.812765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.471 [2024-07-15 20:19:14.812955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.471 [2024-07-15 20:19:14.812962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.471 [2024-07-15 20:19:14.812969] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.471 [2024-07-15 20:19:14.812983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.812990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.471 [2024-07-15 20:19:14.812996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.471 [2024-07-15 20:19:14.813006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.471 [2024-07-15 20:19:14.817132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.471 [2024-07-15 20:19:14.817140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.471 [2024-07-15 20:19:14.817144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.817148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.471 [2024-07-15 20:19:14.817157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.817161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.817164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb78ec0) 00:24:17.471 [2024-07-15 20:19:14.817171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.471 [2024-07-15 20:19:14.817183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfc2c0, cid 3, qid 0 00:24:17.471 [2024-07-15 20:19:14.817391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.471 [2024-07-15 20:19:14.817397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.471 [2024-07-15 20:19:14.817401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.471 [2024-07-15 20:19:14.817404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfc2c0) on tqpair=0xb78ec0 00:24:17.471 [2024-07-15 20:19:14.817411] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:17.471 00:24:17.471 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:17.471 [2024-07-15 20:19:14.860707] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:24:17.471 [2024-07-15 20:19:14.860747] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084726 ] 00:24:17.471 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.471 [2024-07-15 20:19:14.891670] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:17.471 [2024-07-15 20:19:14.891712] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.471 [2024-07-15 20:19:14.891717] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.471 [2024-07-15 20:19:14.891729] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.471 [2024-07-15 20:19:14.891734] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.733 [2024-07-15 20:19:14.895154] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:17.733 [2024-07-15 20:19:14.895178] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2276ec0 0 00:24:17.733 [2024-07-15 20:19:14.903131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.733 [2024-07-15 20:19:14.903143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.733 [2024-07-15 20:19:14.903147] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.733 [2024-07-15 20:19:14.903150] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.733 [2024-07-15 20:19:14.903181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.733 [2024-07-15 20:19:14.903187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.733 [2024-07-15 20:19:14.903191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.733 [2024-07-15 20:19:14.903203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.733 [2024-07-15 20:19:14.903220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.733 [2024-07-15 20:19:14.911134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.733 [2024-07-15 20:19:14.911143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.733 [2024-07-15 20:19:14.911147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.733 [2024-07-15 20:19:14.911151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.733 [2024-07-15 20:19:14.911162] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.733 [2024-07-15 20:19:14.911169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:17.733 [2024-07-15 20:19:14.911182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:17.734 [2024-07-15 20:19:14.911194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:17.734 [2024-07-15 20:19:14.911201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.911209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.911222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.734 [2024-07-15 20:19:14.911435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.734 [2024-07-15 20:19:14.911442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.734 [2024-07-15 20:19:14.911445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.734 [2024-07-15 20:19:14.911454] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:17.734 [2024-07-15 20:19:14.911461] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:17.734 [2024-07-15 20:19:14.911468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.911482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.911493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.734 [2024-07-15 20:19:14.911716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.734 [2024-07-15 20:19:14.911722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.734 [2024-07-15 20:19:14.911726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.734 [2024-07-15 20:19:14.911738] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:17.734 [2024-07-15 20:19:14.911746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.734 [2024-07-15 20:19:14.911752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.911760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.911766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.911777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.734 [2024-07-15 20:19:14.911992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.734 [2024-07-15 20:19:14.911999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:17.734 [2024-07-15 20:19:14.912002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.734 [2024-07-15 20:19:14.912011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.734 [2024-07-15 20:19:14.912020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.912034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.912044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.734 [2024-07-15 20:19:14.912259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.734 [2024-07-15 20:19:14.912266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.734 [2024-07-15 20:19:14.912270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.734 [2024-07-15 20:19:14.912278] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.734 [2024-07-15 20:19:14.912283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.734 [2024-07-15 20:19:14.912290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.734 [2024-07-15 20:19:14.912395] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:17.734 [2024-07-15 20:19:14.912399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.734 [2024-07-15 20:19:14.912407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.912420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.912431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.734 [2024-07-15 20:19:14.912645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.734 [2024-07-15 20:19:14.912651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.734 [2024-07-15 20:19:14.912657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on 
tqpair=0x2276ec0 00:24:17.734 [2024-07-15 20:19:14.912665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.734 [2024-07-15 20:19:14.912675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.912689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.912699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.734 [2024-07-15 20:19:14.912918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.734 [2024-07-15 20:19:14.912924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.734 [2024-07-15 20:19:14.912927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.734 [2024-07-15 20:19:14.912936] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.734 [2024-07-15 20:19:14.912940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.734 [2024-07-15 20:19:14.912948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:17.734 [2024-07-15 20:19:14.912956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.734 [2024-07-15 20:19:14.912964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.734 [2024-07-15 20:19:14.912968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.734 [2024-07-15 20:19:14.912975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.734 [2024-07-15 20:19:14.912985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.735 [2024-07-15 20:19:14.913236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.735 [2024-07-15 20:19:14.913243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.735 [2024-07-15 20:19:14.913247] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913250] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=4096, cccid=0 00:24:17.735 [2024-07-15 20:19:14.913255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f9e40) on tqpair(0x2276ec0): expected_datao=0, payload_size=4096 00:24:17.735 [2024-07-15 20:19:14.913259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913267] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913270] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.735 [2024-07-15 20:19:14.913410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.735 [2024-07-15 20:19:14.913413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.735 [2024-07-15 20:19:14.913424] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:17.735 [2024-07-15 20:19:14.913432] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:17.735 [2024-07-15 20:19:14.913438] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:17.735 [2024-07-15 20:19:14.913442] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:17.735 [2024-07-15 20:19:14.913446] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:17.735 [2024-07-15 20:19:14.913451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.913459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.913466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.913480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.735 [2024-07-15 20:19:14.913492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.735 [2024-07-15 20:19:14.913715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.735 [2024-07-15 20:19:14.913721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.735 [2024-07-15 20:19:14.913725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.735 [2024-07-15 20:19:14.913735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.913748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.735 [2024-07-15 20:19:14.913754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913761] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.913767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.735 [2024-07-15 20:19:14.913773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.913785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.735 [2024-07-15 20:19:14.913791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.913804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.735 [2024-07-15 20:19:14.913808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.913818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.913826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.913830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.913837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.735 [2024-07-15 20:19:14.913848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9e40, cid 0, qid 0 00:24:17.735 [2024-07-15 20:19:14.913854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9fc0, cid 1, qid 0 00:24:17.735 [2024-07-15 20:19:14.913858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa140, cid 2, qid 0 00:24:17.735 [2024-07-15 20:19:14.913863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa2c0, cid 3, qid 0 00:24:17.735 [2024-07-15 20:19:14.913868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 00:24:17.735 [2024-07-15 20:19:14.914104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.735 [2024-07-15 20:19:14.914110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.735 [2024-07-15 20:19:14.914114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.735 [2024-07-15 20:19:14.914127] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:17.735 [2024-07-15 20:19:14.914133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.914140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.914147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.914153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.914166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.735 [2024-07-15 20:19:14.914177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 00:24:17.735 [2024-07-15 20:19:14.914387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.735 [2024-07-15 20:19:14.914393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.735 [2024-07-15 20:19:14.914397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.735 [2024-07-15 20:19:14.914463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.914472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:17.735 [2024-07-15 20:19:14.914479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2276ec0) 00:24:17.735 [2024-07-15 20:19:14.914489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.735 [2024-07-15 20:19:14.914499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 00:24:17.735 [2024-07-15 20:19:14.914704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.735 [2024-07-15 20:19:14.914710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.735 [2024-07-15 20:19:14.914716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=4096, cccid=4 00:24:17.735 [2024-07-15 20:19:14.914724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa440) on tqpair(0x2276ec0): expected_datao=0, payload_size=4096 00:24:17.735 [2024-07-15 20:19:14.914728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914783] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.735 [2024-07-15 20:19:14.914963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:17.735 [2024-07-15 20:19:14.914969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.735 [2024-07-15 20:19:14.914972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.914976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.736 [2024-07-15 20:19:14.914985] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:17.736 [2024-07-15 20:19:14.914998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.915007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.915014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.915018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2276ec0) 00:24:17.736 [2024-07-15 20:19:14.915024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.736 [2024-07-15 20:19:14.915035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 00:24:17.736 [2024-07-15 20:19:14.919132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.736 [2024-07-15 20:19:14.919140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.736 [2024-07-15 20:19:14.919144] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919147] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=4096, cccid=4 00:24:17.736 [2024-07-15 20:19:14.919152] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa440) on tqpair(0x2276ec0): expected_datao=0, payload_size=4096 00:24:17.736 [2024-07-15 20:19:14.919156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919162] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919166] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.736 [2024-07-15 20:19:14.919177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.736 [2024-07-15 20:19:14.919181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.736 [2024-07-15 20:19:14.919197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2276ec0) 00:24:17.736 [2024-07-15 20:19:14.919223] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.736 [2024-07-15 20:19:14.919240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 00:24:17.736 [2024-07-15 20:19:14.919440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.736 [2024-07-15 20:19:14.919446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.736 [2024-07-15 20:19:14.919450] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919453] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=4096, cccid=4 00:24:17.736 [2024-07-15 20:19:14.919457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa440) on tqpair(0x2276ec0): expected_datao=0, payload_size=4096 00:24:17.736 [2024-07-15 20:19:14.919462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919508] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919512] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.736 [2024-07-15 20:19:14.919723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.736 [2024-07-15 20:19:14.919726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.736 [2024-07-15 20:19:14.919738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919774] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:17.736 [2024-07-15 20:19:14.919779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:17.736 [2024-07-15 20:19:14.919784] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:17.736 [2024-07-15 20:19:14.919797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2276ec0) 00:24:17.736 [2024-07-15 20:19:14.919807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.736 [2024-07-15 20:19:14.919814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.919821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2276ec0) 00:24:17.736 [2024-07-15 20:19:14.919827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.736 [2024-07-15 20:19:14.919840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 00:24:17.736 [2024-07-15 20:19:14.919846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa5c0, cid 5, qid 0 00:24:17.736 [2024-07-15 20:19:14.920085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.736 [2024-07-15 20:19:14.920093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.736 [2024-07-15 20:19:14.920096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.920100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.736 [2024-07-15 20:19:14.920107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.736 [2024-07-15 20:19:14.920112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.736 [2024-07-15 20:19:14.920116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.920120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa5c0) on tqpair=0x2276ec0 00:24:17.736 [2024-07-15 20:19:14.920136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.920140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2276ec0) 00:24:17.736 [2024-07-15 20:19:14.920146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.736 [2024-07-15 20:19:14.920157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa5c0, cid 5, qid 0 00:24:17.736 [2024-07-15 20:19:14.920398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.736 [2024-07-15 20:19:14.920404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.736 [2024-07-15 20:19:14.920407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.920411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa5c0) on tqpair=0x2276ec0 00:24:17.736 [2024-07-15 20:19:14.920420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.736 [2024-07-15 20:19:14.920424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2276ec0) 00:24:17.737 [2024-07-15 20:19:14.920430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.737 [2024-07-15 20:19:14.920440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa5c0, cid 5, qid 0 00:24:17.737 [2024-07-15 20:19:14.920617] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.737 [2024-07-15 20:19:14.920623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.737 [2024-07-15 20:19:14.920627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa5c0) on tqpair=0x2276ec0 00:24:17.737 [2024-07-15 20:19:14.920639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2276ec0) 00:24:17.737 [2024-07-15 20:19:14.920649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.737 [2024-07-15 20:19:14.920659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa5c0, cid 5, qid 0 00:24:17.737 [2024-07-15 20:19:14.920836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.737 [2024-07-15 20:19:14.920842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.737 [2024-07-15 20:19:14.920846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa5c0) on tqpair=0x2276ec0 00:24:17.737 [2024-07-15 20:19:14.920864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2276ec0) 00:24:17.737 [2024-07-15 20:19:14.920874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.737 [2024-07-15 20:19:14.920881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2276ec0) 00:24:17.737 [2024-07-15 20:19:14.920893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.737 [2024-07-15 20:19:14.920900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2276ec0) 00:24:17.737 [2024-07-15 20:19:14.920910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.737 [2024-07-15 20:19:14.920918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.920921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2276ec0) 00:24:17.737 [2024-07-15 20:19:14.920927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.737 [2024-07-15 20:19:14.920939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa5c0, cid 5, qid 0 00:24:17.737 [2024-07-15 20:19:14.920944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa440, cid 4, qid 0 
00:24:17.737 [2024-07-15 20:19:14.920948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa740, cid 6, qid 0 00:24:17.737 [2024-07-15 20:19:14.920953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa8c0, cid 7, qid 0 00:24:17.737 [2024-07-15 20:19:14.921218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.737 [2024-07-15 20:19:14.921225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.737 [2024-07-15 20:19:14.921228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921232] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=8192, cccid=5 00:24:17.737 [2024-07-15 20:19:14.921236] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa5c0) on tqpair(0x2276ec0): expected_datao=0, payload_size=8192 00:24:17.737 [2024-07-15 20:19:14.921241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921345] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921350] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.737 [2024-07-15 20:19:14.921361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.737 [2024-07-15 20:19:14.921365] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921368] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=512, cccid=4 00:24:17.737 [2024-07-15 20:19:14.921372] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa440) on tqpair(0x2276ec0): expected_datao=0, payload_size=512 00:24:17.737 [2024-07-15 20:19:14.921377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921383] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921386] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.737 [2024-07-15 20:19:14.921398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.737 [2024-07-15 20:19:14.921401] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921404] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=512, cccid=6 00:24:17.737 [2024-07-15 20:19:14.921409] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa740) on tqpair(0x2276ec0): expected_datao=0, payload_size=512 00:24:17.737 [2024-07-15 20:19:14.921413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921419] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921425] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.737 [2024-07-15 20:19:14.921436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.737 [2024-07-15 20:19:14.921440] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921443] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2276ec0): datao=0, datal=4096, cccid=7 00:24:17.737 [2024-07-15 20:19:14.921447] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22fa8c0) on tqpair(0x2276ec0): expected_datao=0, payload_size=4096 00:24:17.737 [2024-07-15 20:19:14.921451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921466] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.737 [2024-07-15 20:19:14.921636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.737 [2024-07-15 20:19:14.921640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa5c0) on tqpair=0x2276ec0 00:24:17.737 [2024-07-15 20:19:14.921656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.737 [2024-07-15 20:19:14.921662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.737 [2024-07-15 20:19:14.921665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa440) on tqpair=0x2276ec0 00:24:17.737 [2024-07-15 20:19:14.921679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.737 [2024-07-15 20:19:14.921685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.737 [2024-07-15 20:19:14.921688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa740) on tqpair=0x2276ec0 00:24:17.737 [2024-07-15 20:19:14.921699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.737 [2024-07-15 20:19:14.921705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.737 [2024-07-15 20:19:14.921708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.737 [2024-07-15 20:19:14.921712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa8c0) on tqpair=0x2276ec0 00:24:17.737 ===================================================== 00:24:17.737 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.737 ===================================================== 00:24:17.737 Controller Capabilities/Features 00:24:17.737 ================================ 00:24:17.737 Vendor ID: 8086 00:24:17.737 Subsystem Vendor ID: 8086 00:24:17.737 Serial Number: SPDK00000000000001 00:24:17.737 Model Number: SPDK bdev Controller 00:24:17.737 Firmware Version: 24.09 00:24:17.737 Recommended Arb Burst: 6 00:24:17.737 IEEE OUI Identifier: e4 d2 5c 00:24:17.737 Multi-path I/O 00:24:17.737 May have multiple subsystem ports: Yes 00:24:17.737 May have multiple controllers: Yes 00:24:17.737 Associated with SR-IOV VF: No 00:24:17.737 Max Data Transfer Size: 131072 00:24:17.737 Max Number of Namespaces: 32 00:24:17.737 Max Number of I/O Queues: 127 00:24:17.737 NVMe Specification Version (VS): 1.3 00:24:17.737 NVMe Specification Version (Identify): 1.3 00:24:17.738 Maximum Queue Entries: 128 00:24:17.738 Contiguous Queues Required: Yes 00:24:17.738 
Arbitration Mechanisms Supported 00:24:17.738 Weighted Round Robin: Not Supported 00:24:17.738 Vendor Specific: Not Supported 00:24:17.738 Reset Timeout: 15000 ms 00:24:17.738 Doorbell Stride: 4 bytes 00:24:17.738 NVM Subsystem Reset: Not Supported 00:24:17.738 Command Sets Supported 00:24:17.738 NVM Command Set: Supported 00:24:17.738 Boot Partition: Not Supported 00:24:17.738 Memory Page Size Minimum: 4096 bytes 00:24:17.738 Memory Page Size Maximum: 4096 bytes 00:24:17.738 Persistent Memory Region: Not Supported 00:24:17.738 Optional Asynchronous Events Supported 00:24:17.738 Namespace Attribute Notices: Supported 00:24:17.738 Firmware Activation Notices: Not Supported 00:24:17.738 ANA Change Notices: Not Supported 00:24:17.738 PLE Aggregate Log Change Notices: Not Supported 00:24:17.738 LBA Status Info Alert Notices: Not Supported 00:24:17.738 EGE Aggregate Log Change Notices: Not Supported 00:24:17.738 Normal NVM Subsystem Shutdown event: Not Supported 00:24:17.738 Zone Descriptor Change Notices: Not Supported 00:24:17.738 Discovery Log Change Notices: Not Supported 00:24:17.738 Controller Attributes 00:24:17.738 128-bit Host Identifier: Supported 00:24:17.738 Non-Operational Permissive Mode: Not Supported 00:24:17.738 NVM Sets: Not Supported 00:24:17.738 Read Recovery Levels: Not Supported 00:24:17.738 Endurance Groups: Not Supported 00:24:17.738 Predictable Latency Mode: Not Supported 00:24:17.738 Traffic Based Keep ALive: Not Supported 00:24:17.738 Namespace Granularity: Not Supported 00:24:17.738 SQ Associations: Not Supported 00:24:17.738 UUID List: Not Supported 00:24:17.738 Multi-Domain Subsystem: Not Supported 00:24:17.738 Fixed Capacity Management: Not Supported 00:24:17.738 Variable Capacity Management: Not Supported 00:24:17.738 Delete Endurance Group: Not Supported 00:24:17.738 Delete NVM Set: Not Supported 00:24:17.738 Extended LBA Formats Supported: Not Supported 00:24:17.738 Flexible Data Placement Supported: Not Supported 00:24:17.738 00:24:17.738 Controller Memory Buffer Support 00:24:17.738 ================================ 00:24:17.738 Supported: No 00:24:17.738 00:24:17.738 Persistent Memory Region Support 00:24:17.738 ================================ 00:24:17.738 Supported: No 00:24:17.738 00:24:17.738 Admin Command Set Attributes 00:24:17.738 ============================ 00:24:17.738 Security Send/Receive: Not Supported 00:24:17.738 Format NVM: Not Supported 00:24:17.738 Firmware Activate/Download: Not Supported 00:24:17.738 Namespace Management: Not Supported 00:24:17.738 Device Self-Test: Not Supported 00:24:17.738 Directives: Not Supported 00:24:17.738 NVMe-MI: Not Supported 00:24:17.738 Virtualization Management: Not Supported 00:24:17.738 Doorbell Buffer Config: Not Supported 00:24:17.738 Get LBA Status Capability: Not Supported 00:24:17.738 Command & Feature Lockdown Capability: Not Supported 00:24:17.738 Abort Command Limit: 4 00:24:17.738 Async Event Request Limit: 4 00:24:17.738 Number of Firmware Slots: N/A 00:24:17.738 Firmware Slot 1 Read-Only: N/A 00:24:17.738 Firmware Activation Without Reset: N/A 00:24:17.738 Multiple Update Detection Support: N/A 00:24:17.738 Firmware Update Granularity: No Information Provided 00:24:17.738 Per-Namespace SMART Log: No 00:24:17.738 Asymmetric Namespace Access Log Page: Not Supported 00:24:17.738 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:17.738 Command Effects Log Page: Supported 00:24:17.738 Get Log Page Extended Data: Supported 00:24:17.738 Telemetry Log Pages: Not Supported 00:24:17.738 Persistent Event Log 
Pages: Not Supported 00:24:17.738 Supported Log Pages Log Page: May Support 00:24:17.738 Commands Supported & Effects Log Page: Not Supported 00:24:17.738 Feature Identifiers & Effects Log Page:May Support 00:24:17.738 NVMe-MI Commands & Effects Log Page: May Support 00:24:17.738 Data Area 4 for Telemetry Log: Not Supported 00:24:17.738 Error Log Page Entries Supported: 128 00:24:17.738 Keep Alive: Supported 00:24:17.738 Keep Alive Granularity: 10000 ms 00:24:17.738 00:24:17.738 NVM Command Set Attributes 00:24:17.738 ========================== 00:24:17.738 Submission Queue Entry Size 00:24:17.738 Max: 64 00:24:17.738 Min: 64 00:24:17.738 Completion Queue Entry Size 00:24:17.738 Max: 16 00:24:17.738 Min: 16 00:24:17.738 Number of Namespaces: 32 00:24:17.738 Compare Command: Supported 00:24:17.738 Write Uncorrectable Command: Not Supported 00:24:17.738 Dataset Management Command: Supported 00:24:17.738 Write Zeroes Command: Supported 00:24:17.738 Set Features Save Field: Not Supported 00:24:17.738 Reservations: Supported 00:24:17.738 Timestamp: Not Supported 00:24:17.738 Copy: Supported 00:24:17.738 Volatile Write Cache: Present 00:24:17.738 Atomic Write Unit (Normal): 1 00:24:17.738 Atomic Write Unit (PFail): 1 00:24:17.738 Atomic Compare & Write Unit: 1 00:24:17.738 Fused Compare & Write: Supported 00:24:17.738 Scatter-Gather List 00:24:17.738 SGL Command Set: Supported 00:24:17.738 SGL Keyed: Supported 00:24:17.738 SGL Bit Bucket Descriptor: Not Supported 00:24:17.738 SGL Metadata Pointer: Not Supported 00:24:17.738 Oversized SGL: Not Supported 00:24:17.738 SGL Metadata Address: Not Supported 00:24:17.738 SGL Offset: Supported 00:24:17.738 Transport SGL Data Block: Not Supported 00:24:17.738 Replay Protected Memory Block: Not Supported 00:24:17.738 00:24:17.738 Firmware Slot Information 00:24:17.738 ========================= 00:24:17.738 Active slot: 1 00:24:17.738 Slot 1 Firmware Revision: 24.09 00:24:17.738 00:24:17.738 00:24:17.738 Commands Supported and Effects 00:24:17.738 ============================== 00:24:17.738 Admin Commands 00:24:17.738 -------------- 00:24:17.738 Get Log Page (02h): Supported 00:24:17.738 Identify (06h): Supported 00:24:17.738 Abort (08h): Supported 00:24:17.738 Set Features (09h): Supported 00:24:17.738 Get Features (0Ah): Supported 00:24:17.738 Asynchronous Event Request (0Ch): Supported 00:24:17.738 Keep Alive (18h): Supported 00:24:17.738 I/O Commands 00:24:17.738 ------------ 00:24:17.739 Flush (00h): Supported LBA-Change 00:24:17.739 Write (01h): Supported LBA-Change 00:24:17.739 Read (02h): Supported 00:24:17.739 Compare (05h): Supported 00:24:17.739 Write Zeroes (08h): Supported LBA-Change 00:24:17.739 Dataset Management (09h): Supported LBA-Change 00:24:17.739 Copy (19h): Supported LBA-Change 00:24:17.739 00:24:17.739 Error Log 00:24:17.739 ========= 00:24:17.739 00:24:17.739 Arbitration 00:24:17.739 =========== 00:24:17.739 Arbitration Burst: 1 00:24:17.739 00:24:17.739 Power Management 00:24:17.739 ================ 00:24:17.739 Number of Power States: 1 00:24:17.739 Current Power State: Power State #0 00:24:17.739 Power State #0: 00:24:17.739 Max Power: 0.00 W 00:24:17.739 Non-Operational State: Operational 00:24:17.739 Entry Latency: Not Reported 00:24:17.739 Exit Latency: Not Reported 00:24:17.739 Relative Read Throughput: 0 00:24:17.739 Relative Read Latency: 0 00:24:17.739 Relative Write Throughput: 0 00:24:17.739 Relative Write Latency: 0 00:24:17.739 Idle Power: Not Reported 00:24:17.739 Active Power: Not Reported 00:24:17.739 
Non-Operational Permissive Mode: Not Supported 00:24:17.739 00:24:17.739 Health Information 00:24:17.739 ================== 00:24:17.739 Critical Warnings: 00:24:17.739 Available Spare Space: OK 00:24:17.739 Temperature: OK 00:24:17.739 Device Reliability: OK 00:24:17.739 Read Only: No 00:24:17.739 Volatile Memory Backup: OK 00:24:17.739 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:17.739 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:17.739 Available Spare: 0% 00:24:17.739 Available Spare Threshold: 0% 00:24:17.739 Life Percentage Used:[2024-07-15 20:19:14.921809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.921814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2276ec0) 00:24:17.739 [2024-07-15 20:19:14.921821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.739 [2024-07-15 20:19:14.921834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa8c0, cid 7, qid 0 00:24:17.739 [2024-07-15 20:19:14.922039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.739 [2024-07-15 20:19:14.922045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.739 [2024-07-15 20:19:14.922049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa8c0) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922082] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:17.739 [2024-07-15 20:19:14.922092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9e40) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.739 [2024-07-15 20:19:14.922103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f9fc0) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.739 [2024-07-15 20:19:14.922114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa140) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.739 [2024-07-15 20:19:14.922131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa2c0) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.739 [2024-07-15 20:19:14.922144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2276ec0) 00:24:17.739 [2024-07-15 20:19:14.922158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.739 [2024-07-15 20:19:14.922170] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa2c0, cid 3, qid 0 00:24:17.739 [2024-07-15 20:19:14.922389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.739 [2024-07-15 20:19:14.922396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.739 [2024-07-15 20:19:14.922399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa2c0) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2276ec0) 00:24:17.739 [2024-07-15 20:19:14.922424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.739 [2024-07-15 20:19:14.922437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa2c0, cid 3, qid 0 00:24:17.739 [2024-07-15 20:19:14.922663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.739 [2024-07-15 20:19:14.922669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.739 [2024-07-15 20:19:14.922673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa2c0) on tqpair=0x2276ec0 00:24:17.739 [2024-07-15 20:19:14.922681] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:17.739 [2024-07-15 20:19:14.922686] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:17.739 [2024-07-15 20:19:14.922695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.739 [2024-07-15 20:19:14.922702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2276ec0) 00:24:17.739 [2024-07-15 20:19:14.922709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.739 [2024-07-15 20:19:14.922718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa2c0, cid 3, qid 0 00:24:17.739 [2024-07-15 20:19:14.922932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.739 [2024-07-15 20:19:14.922938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.740 [2024-07-15 20:19:14.922942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.922945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa2c0) on tqpair=0x2276ec0 00:24:17.740 [2024-07-15 20:19:14.922955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.922959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.922965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2276ec0) 00:24:17.740 [2024-07-15 20:19:14.922971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.740 [2024-07-15 20:19:14.922981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa2c0, cid 3, qid 0 00:24:17.740 [2024-07-15 20:19:14.927133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.740 [2024-07-15 20:19:14.927141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.740 [2024-07-15 20:19:14.927145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.927149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa2c0) on tqpair=0x2276ec0 00:24:17.740 [2024-07-15 20:19:14.927158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.927162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.927166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2276ec0) 00:24:17.740 [2024-07-15 20:19:14.927172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.740 [2024-07-15 20:19:14.927184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22fa2c0, cid 3, qid 0 00:24:17.740 [2024-07-15 20:19:14.927389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.740 [2024-07-15 20:19:14.927395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.740 [2024-07-15 20:19:14.927398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.740 [2024-07-15 20:19:14.927402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22fa2c0) on tqpair=0x2276ec0 00:24:17.740 [2024-07-15 20:19:14.927409] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:24:17.740 0% 00:24:17.740 Data Units Read: 0 00:24:17.740 Data Units Written: 0 00:24:17.740 Host Read Commands: 0 00:24:17.740 Host Write Commands: 0 00:24:17.740 Controller Busy Time: 0 minutes 00:24:17.740 Power Cycles: 0 00:24:17.740 Power On Hours: 0 hours 00:24:17.740 Unsafe Shutdowns: 0 00:24:17.740 Unrecoverable Media Errors: 0 00:24:17.740 Lifetime Error Log Entries: 0 00:24:17.740 Warning Temperature Time: 0 minutes 00:24:17.740 Critical Temperature Time: 0 minutes 00:24:17.740 00:24:17.740 Number of Queues 00:24:17.740 ================ 00:24:17.740 Number of I/O Submission Queues: 127 00:24:17.740 Number of I/O Completion Queues: 127 00:24:17.740 00:24:17.740 Active Namespaces 00:24:17.740 ================= 00:24:17.740 Namespace ID:1 00:24:17.740 Error Recovery Timeout: Unlimited 00:24:17.740 Command Set Identifier: NVM (00h) 00:24:17.740 Deallocate: Supported 00:24:17.740 Deallocated/Unwritten Error: Not Supported 00:24:17.740 Deallocated Read Value: Unknown 00:24:17.740 Deallocate in Write Zeroes: Not Supported 00:24:17.740 Deallocated Guard Field: 0xFFFF 00:24:17.740 Flush: Supported 00:24:17.740 Reservation: Supported 00:24:17.740 Namespace Sharing Capabilities: Multiple Controllers 00:24:17.740 Size (in LBAs): 131072 (0GiB) 00:24:17.740 Capacity (in LBAs): 131072 (0GiB) 00:24:17.740 Utilization (in LBAs): 131072 (0GiB) 00:24:17.740 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:17.740 EUI64: ABCDEF0123456789 00:24:17.740 UUID: 49e8dee6-f213-4085-a306-05ebb5774d79 00:24:17.740 Thin Provisioning: Not Supported 00:24:17.740 Per-NS Atomic Units: Yes 00:24:17.740 Atomic Boundary Size (Normal): 0 
00:24:17.740 Atomic Boundary Size (PFail): 0 00:24:17.740 Atomic Boundary Offset: 0 00:24:17.740 Maximum Single Source Range Length: 65535 00:24:17.740 Maximum Copy Length: 65535 00:24:17.740 Maximum Source Range Count: 1 00:24:17.740 NGUID/EUI64 Never Reused: No 00:24:17.740 Namespace Write Protected: No 00:24:17.740 Number of LBA Formats: 1 00:24:17.740 Current LBA Format: LBA Format #00 00:24:17.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:17.740 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.740 20:19:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.740 rmmod nvme_tcp 00:24:17.740 rmmod nvme_fabrics 00:24:17.740 rmmod nvme_keyring 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1084501 ']' 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1084501 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1084501 ']' 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1084501 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1084501 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1084501' 00:24:17.740 killing process with pid 1084501 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1084501 00:24:17.740 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1084501 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
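The trace above shows host/identify.sh tearing down what it set up: the test subsystem is removed over RPC, the nvmf_tgt process is killed, and the initiator-side NVMe/TCP kernel modules are unloaded. A minimal sketch of that cleanup, assuming a local SPDK checkout and the default RPC socket; the checkout path and the NVMF_PID variable are illustrative, not the test's literal code:

  SPDK_DIR=/path/to/spdk               # illustrative; the CI uses its own workspace path
  RPC="$SPDK_DIR/scripts/rpc.py"       # talks to /var/tmp/spdk.sock by default

  # Remove the subsystem the test created, then stop the target application.
  "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$NVMF_PID" 2>/dev/null || true # NVMF_PID: the nvmf_tgt PID recorded at startup (hypothetical variable)

  # Unload the initiator-side kernel modules pulled in for the test.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  modprobe -v -r nvme-keyring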
00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.000 20:19:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.909 20:19:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.909 00:24:19.909 real 0m11.003s 00:24:19.909 user 0m7.673s 00:24:19.909 sys 0m5.749s 00:24:19.909 20:19:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.909 20:19:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.909 ************************************ 00:24:19.909 END TEST nvmf_identify 00:24:19.909 ************************************ 00:24:19.909 20:19:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:19.909 20:19:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:19.909 20:19:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:19.909 20:19:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.909 20:19:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.170 ************************************ 00:24:20.170 START TEST nvmf_perf 00:24:20.170 ************************************ 00:24:20.170 20:19:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.170 * Looking for test storage... 
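With nvmf_identify passed, the harness starts the next host test, invoking test/nvmf/host/perf.sh with --transport=tcp through the run_test wrapper. Outside the CI the same script can be launched directly from an SPDK checkout (it generally needs root); the checkout path below is illustrative:

  cd /path/to/spdk                     # illustrative checkout location
  sudo ./test/nvmf/host/perf.sh --transport=tcp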
00:24:20.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.170 20:19:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.170 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 20:19:17 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.171 20:19:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:28.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:28.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:28.305 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:28.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.305 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:24:28.306 00:24:28.306 --- 10.0.0.2 ping statistics --- 00:24:28.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.306 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:24:28.306 00:24:28.306 --- 10.0.0.1 ping statistics --- 00:24:28.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.306 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1089047 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1089047 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1089047 ']' 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.306 20:19:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.306 [2024-07-15 20:19:24.852149] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:24:28.306 [2024-07-15 20:19:24.852215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.306 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.306 [2024-07-15 20:19:24.924859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.306 [2024-07-15 20:19:24.999014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.306 [2024-07-15 20:19:24.999055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:28.306 [2024-07-15 20:19:24.999062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.306 [2024-07-15 20:19:24.999068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.306 [2024-07-15 20:19:24.999074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.306 [2024-07-15 20:19:24.999162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.306 [2024-07-15 20:19:24.999230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.306 [2024-07-15 20:19:24.999427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.306 [2024-07-15 20:19:24.999429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:28.306 20:19:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:28.875 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:28.875 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:29.135 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:29.394 [2024-07-15 20:19:26.653316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.394 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.654 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:29.654 20:19:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.654 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:29.654 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:29.914 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.914 [2024-07-15 20:19:27.331889] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.174 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:30.174 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:30.174 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:30.174 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:30.174 20:19:27 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:31.554 Initializing NVMe Controllers 00:24:31.554 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:31.554 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:31.554 Initialization complete. Launching workers. 00:24:31.554 ======================================================== 00:24:31.554 Latency(us) 00:24:31.554 Device Information : IOPS MiB/s Average min max 00:24:31.554 PCIE (0000:65:00.0) NSID 1 from core 0: 79884.97 312.05 400.03 13.31 5206.98 00:24:31.554 ======================================================== 00:24:31.554 Total : 79884.97 312.05 400.03 13.31 5206.98 00:24:31.554 00:24:31.554 20:19:28 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:31.554 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.951 Initializing NVMe Controllers 00:24:32.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:32.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:32.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:32.951 Initialization complete. Launching workers. 
00:24:32.951 ======================================================== 00:24:32.951 Latency(us) 00:24:32.951 Device Information : IOPS MiB/s Average min max 00:24:32.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.00 0.27 15123.35 466.48 45513.26 00:24:32.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16468.90 6985.64 47895.66 00:24:32.951 ======================================================== 00:24:32.951 Total : 130.00 0.51 15754.72 466.48 47895.66 00:24:32.951 00:24:32.951 20:19:30 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.951 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.939 Initializing NVMe Controllers 00:24:33.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:33.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:33.939 Initialization complete. Launching workers. 00:24:33.939 ======================================================== 00:24:33.939 Latency(us) 00:24:33.939 Device Information : IOPS MiB/s Average min max 00:24:33.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9493.41 37.08 3371.82 553.21 7810.30 00:24:33.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3868.76 15.11 8315.03 6167.75 16431.74 00:24:33.939 ======================================================== 00:24:33.939 Total : 13362.17 52.20 4803.03 553.21 16431.74 00:24:33.939 00:24:33.939 20:19:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:33.939 20:19:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:33.939 20:19:31 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:33.939 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.485 Initializing NVMe Controllers 00:24:36.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.485 Controller IO queue size 128, less than required. 00:24:36.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.485 Controller IO queue size 128, less than required. 00:24:36.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:36.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:36.485 Initialization complete. Launching workers. 
00:24:36.485 ======================================================== 00:24:36.485 Latency(us) 00:24:36.485 Device Information : IOPS MiB/s Average min max 00:24:36.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 901.81 225.45 148467.92 78398.17 239222.81 00:24:36.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 592.38 148.09 222990.77 97324.17 305172.74 00:24:36.485 ======================================================== 00:24:36.485 Total : 1494.19 373.55 178012.77 78398.17 305172.74 00:24:36.485 00:24:36.485 20:19:33 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:36.485 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.746 No valid NVMe controllers or AIO or URING devices found 00:24:36.746 Initializing NVMe Controllers 00:24:36.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.746 Controller IO queue size 128, less than required. 00:24:36.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.746 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:36.746 Controller IO queue size 128, less than required. 00:24:36.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.746 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:36.746 WARNING: Some requested NVMe devices were skipped 00:24:36.746 20:19:33 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:36.746 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.288 Initializing NVMe Controllers 00:24:39.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.288 Controller IO queue size 128, less than required. 00:24:39.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:39.288 Controller IO queue size 128, less than required. 00:24:39.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:39.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:39.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:39.288 Initialization complete. Launching workers. 
00:24:39.288 00:24:39.288 ==================== 00:24:39.288 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:39.288 TCP transport: 00:24:39.288 polls: 42565 00:24:39.288 idle_polls: 17589 00:24:39.288 sock_completions: 24976 00:24:39.288 nvme_completions: 3833 00:24:39.288 submitted_requests: 5792 00:24:39.288 queued_requests: 1 00:24:39.288 00:24:39.288 ==================== 00:24:39.288 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:39.288 TCP transport: 00:24:39.288 polls: 40131 00:24:39.288 idle_polls: 15879 00:24:39.288 sock_completions: 24252 00:24:39.288 nvme_completions: 3723 00:24:39.288 submitted_requests: 5640 00:24:39.288 queued_requests: 1 00:24:39.288 ======================================================== 00:24:39.288 Latency(us) 00:24:39.288 Device Information : IOPS MiB/s Average min max 00:24:39.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 957.37 239.34 137628.47 59415.53 220769.09 00:24:39.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 929.89 232.47 140414.96 63240.90 221636.60 00:24:39.288 ======================================================== 00:24:39.288 Total : 1887.25 471.81 139001.43 59415.53 221636.60 00:24:39.288 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.288 rmmod nvme_tcp 00:24:39.288 rmmod nvme_fabrics 00:24:39.288 rmmod nvme_keyring 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1089047 ']' 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1089047 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1089047 ']' 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1089047 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.288 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1089047 00:24:39.548 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:39.548 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:39.548 20:19:36 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1089047' 00:24:39.548 killing process with pid 1089047 00:24:39.548 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1089047 00:24:39.548 20:19:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1089047 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.455 20:19:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.367 20:19:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.628 00:24:43.628 real 0m23.424s 00:24:43.628 user 0m56.629s 00:24:43.628 sys 0m7.718s 00:24:43.628 20:19:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.628 20:19:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:43.628 ************************************ 00:24:43.628 END TEST nvmf_perf 00:24:43.628 ************************************ 00:24:43.628 20:19:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:43.628 20:19:40 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:43.628 20:19:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:43.628 20:19:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.628 20:19:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.628 ************************************ 00:24:43.628 START TEST nvmf_fio_host 00:24:43.628 ************************************ 00:24:43.628 20:19:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:43.628 * Looking for test storage... 
00:24:43.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.628 20:19:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.628 20:19:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.628 20:19:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.628 20:19:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.629 20:19:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.629 20:19:41 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:51.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:51.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:51.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:51.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
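The nvmf_tcp_init trace that follows repeats the same two-port loopback topology already set up for the nvmf_perf run: one of the two enumerated E810 ports (cvl_0_0) is moved into a private network namespace to act as the target side, its peer (cvl_0_1) stays in the default namespace as the initiator, TCP port 4420 is opened in the firewall, and reachability is checked with ping before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, using the interface names and 10.0.0.x/24 addresses this particular run selected (values would differ on other hosts):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                  # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) traffic through the host firewall
  ping -c 1 10.0.0.2                                                  # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
  # the target application is then started inside the namespace, e.g.:
  # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
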
00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.769 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:51.770 20:19:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:51.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:24:51.770 00:24:51.770 --- 10.0.0.2 ping statistics --- 00:24:51.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.770 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:24:51.770 00:24:51.770 --- 10.0.0.1 ping statistics --- 00:24:51.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.770 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1095784 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1095784 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1095784 ']' 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.770 [2024-07-15 20:19:48.201530] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:24:51.770 [2024-07-15 20:19:48.201591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.770 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.770 [2024-07-15 20:19:48.271640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.770 [2024-07-15 20:19:48.346669] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:51.770 [2024-07-15 20:19:48.346706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.770 [2024-07-15 20:19:48.346714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.770 [2024-07-15 20:19:48.346721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.770 [2024-07-15 20:19:48.346726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.770 [2024-07-15 20:19:48.346872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.770 [2024-07-15 20:19:48.346982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.770 [2024-07-15 20:19:48.347157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.770 [2024-07-15 20:19:48.347157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:51.770 20:19:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:51.770 [2024-07-15 20:19:49.126132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.770 20:19:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:51.770 20:19:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.770 20:19:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.031 20:19:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:52.031 Malloc1 00:24:52.031 20:19:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.291 20:19:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:52.291 20:19:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.551 [2024-07-15 20:19:49.852060] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.551 20:19:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:52.812 20:19:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:53.072 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:53.072 fio-3.35 00:24:53.072 Starting 1 thread 00:24:53.072 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.637 00:24:55.637 test: (groupid=0, jobs=1): err= 0: pid=1096369: Mon Jul 15 20:19:52 2024 00:24:55.637 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2003msec) 00:24:55.637 slat (usec): min=2, max=256, avg= 2.17, stdev= 2.16 00:24:55.637 clat (usec): min=2769, max=9808, avg=5302.13, stdev=794.18 00:24:55.637 lat (usec): min=2771, max=9810, avg=5304.30, stdev=794.30 00:24:55.637 clat percentiles (usec): 00:24:55.637 | 1.00th=[ 3884], 5.00th=[ 4359], 10.00th=[ 4555], 20.00th=[ 4752], 00:24:55.637 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5276], 00:24:55.637 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 6325], 95.00th=[ 6915], 00:24:55.637 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[ 9241], 99.95th=[ 9372], 00:24:55.637 | 99.99th=[ 9765] 00:24:55.637 bw ( KiB/s): min=53496, 
max=55432, per=99.90%, avg=54856.00, stdev=911.39, samples=4 00:24:55.637 iops : min=13374, max=13858, avg=13714.00, stdev=227.85, samples=4 00:24:55.637 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2003msec); 0 zone resets 00:24:55.637 slat (usec): min=2, max=235, avg= 2.27, stdev= 1.57 00:24:55.637 clat (usec): min=2032, max=7269, avg=3967.48, stdev=498.10 00:24:55.637 lat (usec): min=2034, max=7271, avg=3969.75, stdev=498.17 00:24:55.637 clat percentiles (usec): 00:24:55.637 | 1.00th=[ 2638], 5.00th=[ 3032], 10.00th=[ 3294], 20.00th=[ 3589], 00:24:55.637 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4113], 00:24:55.637 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:24:55.637 | 99.00th=[ 5145], 99.50th=[ 5407], 99.90th=[ 6194], 99.95th=[ 6521], 00:24:55.637 | 99.99th=[ 7046] 00:24:55.638 bw ( KiB/s): min=53936, max=55528, per=99.94%, avg=54774.00, stdev=655.00, samples=4 00:24:55.638 iops : min=13484, max=13882, avg=13693.50, stdev=163.75, samples=4 00:24:55.638 lat (msec) : 4=24.72%, 10=75.28% 00:24:55.638 cpu : usr=68.83%, sys=25.62%, ctx=30, majf=0, minf=7 00:24:55.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:55.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:55.638 issued rwts: total=27497,27445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:55.638 00:24:55.638 Run status group 0 (all jobs): 00:24:55.638 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2003-2003msec 00:24:55.638 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2003-2003msec 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:55.638 20:19:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:55.907 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:55.907 fio-3.35 00:24:55.907 Starting 1 thread 00:24:55.907 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.453 00:24:58.453 test: (groupid=0, jobs=1): err= 0: pid=1097139: Mon Jul 15 20:19:55 2024 00:24:58.453 read: IOPS=8683, BW=136MiB/s (142MB/s)(273MiB/2009msec) 00:24:58.453 slat (usec): min=3, max=114, avg= 3.64, stdev= 1.70 00:24:58.453 clat (usec): min=1593, max=22086, avg=9173.65, stdev=2466.94 00:24:58.453 lat (usec): min=1596, max=22089, avg=9177.29, stdev=2467.18 00:24:58.453 clat percentiles (usec): 00:24:58.453 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 7046], 00:24:58.453 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:24:58.453 | 70.00th=[10159], 80.00th=[11338], 90.00th=[12649], 95.00th=[13566], 00:24:58.453 | 99.00th=[15533], 99.50th=[16319], 99.90th=[17957], 99.95th=[18220], 00:24:58.453 | 99.99th=[18482] 00:24:58.453 bw ( KiB/s): min=62528, max=75232, per=49.47%, avg=68736.00, stdev=5912.93, samples=4 00:24:58.453 iops : min= 3908, max= 4702, avg=4296.00, stdev=369.56, samples=4 00:24:58.453 write: IOPS=4960, BW=77.5MiB/s (81.3MB/s)(140MiB/1803msec); 0 zone resets 00:24:58.453 slat (usec): min=40, max=361, avg=41.24, stdev= 8.24 00:24:58.453 clat (usec): min=2861, max=18444, avg=9782.31, stdev=1903.08 00:24:58.453 lat (usec): min=2901, max=18488, avg=9823.55, stdev=1905.37 00:24:58.453 clat percentiles (usec): 00:24:58.453 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7504], 20.00th=[ 8160], 00:24:58.453 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:24:58.453 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12387], 95.00th=[13304], 00:24:58.453 | 99.00th=[15270], 99.50th=[15795], 99.90th=[16712], 99.95th=[17171], 00:24:58.453 | 99.99th=[18482] 00:24:58.453 bw ( KiB/s): min=65504, max=77984, per=90.15%, avg=71552.00, stdev=5892.75, samples=4 00:24:58.453 iops : min= 4094, max= 4874, avg=4472.00, stdev=368.30, samples=4 00:24:58.453 lat (msec) : 2=0.03%, 4=0.18%, 10=64.28%, 20=35.51%, 50=0.01% 00:24:58.453 cpu : usr=82.32%, sys=13.79%, ctx=15, majf=0, minf=10 00:24:58.453 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:58.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:58.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:58.453 issued rwts: total=17446,8944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:58.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:58.453 00:24:58.453 Run status group 0 (all jobs): 00:24:58.453 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=273MiB (286MB), run=2009-2009msec 00:24:58.453 WRITE: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=140MiB (147MB), run=1803-1803msec 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.453 rmmod nvme_tcp 00:24:58.453 rmmod nvme_fabrics 00:24:58.453 rmmod nvme_keyring 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1095784 ']' 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1095784 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1095784 ']' 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1095784 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1095784 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1095784' 00:24:58.453 killing process with pid 1095784 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1095784 00:24:58.453 20:19:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1095784 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.713 20:19:56 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.713 20:19:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.658 20:19:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:00.919 00:25:00.919 real 0m17.204s 00:25:00.919 user 1m6.146s 00:25:00.919 sys 0m7.158s 00:25:00.919 20:19:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.919 20:19:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.919 ************************************ 00:25:00.919 END TEST nvmf_fio_host 00:25:00.919 ************************************ 00:25:00.919 20:19:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:00.919 20:19:58 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:00.919 20:19:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:00.919 20:19:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.919 20:19:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.919 ************************************ 00:25:00.919 START TEST nvmf_failover 00:25:00.919 ************************************ 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:00.919 * Looking for test storage... 
00:25:00.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.919 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:00.920 20:19:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:09.068 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:09.068 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:09.068 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:09.068 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.068 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:09.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:09.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:25:09.068 00:25:09.068 --- 10.0.0.2 ping statistics --- 00:25:09.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.069 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:25:09.069 00:25:09.069 --- 10.0.0.1 ping statistics --- 00:25:09.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.069 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1101788 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1101788 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1101788 ']' 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.069 20:20:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:09.069 [2024-07-15 20:20:05.558463] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:25:09.069 [2024-07-15 20:20:05.558528] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.069 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.069 [2024-07-15 20:20:05.646636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:09.069 [2024-07-15 20:20:05.740318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.069 [2024-07-15 20:20:05.740374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.069 [2024-07-15 20:20:05.740382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.069 [2024-07-15 20:20:05.740389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.069 [2024-07-15 20:20:05.740395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.069 [2024-07-15 20:20:05.740526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.069 [2024-07-15 20:20:05.740692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.069 [2024-07-15 20:20:05.740692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.069 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:09.330 [2024-07-15 20:20:06.521878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.330 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:09.330 Malloc0 00:25:09.330 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:09.591 20:20:06 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:09.855 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.855 [2024-07-15 20:20:07.225801] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.855 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.117 [2024-07-15 
20:20:07.394245] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:10.117 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:10.378 [2024-07-15 20:20:07.566796] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1102156 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1102156 /var/tmp/bdevperf.sock 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1102156 ']' 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.378 20:20:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.320 20:20:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.320 20:20:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:11.320 20:20:08 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.320 NVMe0n1 00:25:11.320 20:20:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.892 00:25:11.892 20:20:09 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1102489 00:25:11.892 20:20:09 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:11.892 20:20:09 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.834 20:20:10 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.834 [2024-07-15 20:20:10.233069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.834 [2024-07-15 20:20:10.233109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.834 [2024-07-15
20:20:10.233501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.835 [2024-07-15 20:20:10.233505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.835 [2024-07-15 20:20:10.233509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.835 [2024-07-15 20:20:10.233514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.835 [2024-07-15 20:20:10.233519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.835 [2024-07-15 20:20:10.233523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc50 is same with the state(5) to be set 00:25:12.835 20:20:10 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:16.135 20:20:13 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:16.135 00:25:16.397 20:20:13 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.397 [2024-07-15 20:20:13.716722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716781] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.716807] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.717004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.717009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.717013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.717017] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.717022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.397 [2024-07-15 20:20:13.717026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717067] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717076] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 
20:20:13.717098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717150] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717159] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 [2024-07-15 20:20:13.717177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3e3a0 is same with the state(5) to be set 00:25:16.398 20:20:13 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:19.706 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.706 [2024-07-15 20:20:16.891319] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.706 20:20:16 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:20.647 20:20:17 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.647 [2024-07-15 20:20:18.065015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065049] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065064] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065116] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065120] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065159] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.647 [2024-07-15 20:20:18.065185] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ea80 is same with the state(5) to be set 00:25:20.908 20:20:18 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1102489 00:25:27.503 0 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1102156 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1102156 ']' 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1102156 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1102156 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1102156' 00:25:27.503 killing process with pid 1102156 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1102156 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1102156 00:25:27.503 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.503 [2024-07-15 20:20:07.643938] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:25:27.503 [2024-07-15 20:20:07.643995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102156 ]
00:25:27.503 EAL: No free 2048 kB hugepages reported on node 1
00:25:27.503 [2024-07-15 20:20:07.704373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:27.503 [2024-07-15 20:20:07.769286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:27.503 Running I/O for 15 seconds...
00:25:27.503 [2024-07-15 20:20:10.233900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.503 [2024-07-15 20:20:10.233935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs of the same form repeat from 20:20:10.233954 through 20:20:10.236021, covering READ lba:101376 through lba:102320 and WRITE lba:102336 through lba:102384 on sqid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:27.506 [2024-07-15 20:20:10.236029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1668300 is same with the state(5) to be set [2024-07-15 20:20:10.236037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o [2024-07-15 20:20:10.236043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-07-15 20:20:10.236050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102328 len:8 PRP1 0x0 PRP2 0x0 [2024-07-15 20:20:10.236057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.506 [2024-07-15 20:20:10.236096]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1668300 was disconnected and freed. reset controller. 00:25:27.506 [2024-07-15 20:20:10.236105] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:27.506 [2024-07-15 20:20:10.236128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.506 [2024-07-15 20:20:10.236136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:10.236144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.506 [2024-07-15 20:20:10.236151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:10.236159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.506 [2024-07-15 20:20:10.236166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:10.236173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.506 [2024-07-15 20:20:10.236180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:10.236187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.506 [2024-07-15 20:20:10.239782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.506 [2024-07-15 20:20:10.239803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1646ef0 (9): Bad file descriptor 00:25:27.506 [2024-07-15 20:20:10.407729] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:27.506 [2024-07-15 20:20:13.718920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.718958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.718979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.718987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.718997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.506 [2024-07-15 20:20:13.719117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.506 [2024-07-15 20:20:13.719132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.507 [2024-07-15 20:20:13.719475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89544 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.507 [2024-07-15 20:20:13.719683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.507 [2024-07-15 20:20:13.719692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 
[2024-07-15 20:20:13.719777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.719983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.719993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:27.508 [2024-07-15 20:20:13.720268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.508 [2024-07-15 20:20:13.720369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.508 [2024-07-15 20:20:13.720377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720587] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.509 [2024-07-15 20:20:13.720611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90048 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90056 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90064 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90072 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90080 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 
20:20:13.720762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90088 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90096 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90104 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90120 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90128 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720914] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90136 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90144 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90152 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.720978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.720985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.720990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.720996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90160 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.721008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.721016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.721022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90168 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.721034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.509 [2024-07-15 20:20:13.721042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.509 [2024-07-15 20:20:13.721047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.509 [2024-07-15 20:20:13.721053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90176 len:8 PRP1 0x0 PRP2 0x0 00:25:27.509 [2024-07-15 20:20:13.721060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.721067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.721072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.721078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90184 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.721085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.721092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.721098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.731880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89424 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.731910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.731924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.731929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.731936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89432 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.731943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.731950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.731955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.731961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89440 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.731968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.731975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.731981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.731986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89448 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.731993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.732005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.732011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.732023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.732036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 
20:20:13.732042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.732049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.510 [2024-07-15 20:20:13.732061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.510 [2024-07-15 20:20:13.732067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0 00:25:27.510 [2024-07-15 20:20:13.732074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732112] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x166a270 was disconnected and freed. reset controller. 00:25:27.510 [2024-07-15 20:20:13.732129] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:27.510 [2024-07-15 20:20:13.732156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.510 [2024-07-15 20:20:13.732165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.510 [2024-07-15 20:20:13.732181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.510 [2024-07-15 20:20:13.732195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.510 [2024-07-15 20:20:13.732210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.510 [2024-07-15 20:20:13.732217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.510 [2024-07-15 20:20:13.732255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1646ef0 (9): Bad file descriptor 00:25:27.510 [2024-07-15 20:20:13.735811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.510 [2024-07-15 20:20:13.854475] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:27.510 [2024-07-15 20:20:18.066022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.510 [2024-07-15 20:20:18.066059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.510 [... three further ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:1-3) aborted with ABORTED - SQ DELETION (00/08) at 2024-07-15 20:20:18.066071-18.066113 ...]
00:25:27.510 [2024-07-15 20:20:18.066120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ef0 is same with the state(5) to be set
00:25:27.510 [... repeated NOTICE lines omitted for 2024-07-15 20:20:18.066185-18.068248: queued WRITE commands sqid:1 lba:128512-129224 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and queued READ commands sqid:1 lba:128208-128496 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:25:27.513 [2024-07-15 20:20:18.068267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:27.513 [2024-07-15 20:20:18.068273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:27.513 [2024-07-15 20:20:18.068279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128504 len:8 PRP1 0x0 PRP2 0x0
00:25:27.513 [2024-07-15 20:20:18.068287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.513 [2024-07-15 20:20:18.068322] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x166af20 was disconnected and freed. reset controller.
00:25:27.513 [2024-07-15 20:20:18.068332] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:27.513 [2024-07-15 20:20:18.068340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:27.513 [2024-07-15 20:20:18.071889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:27.513 [2024-07-15 20:20:18.071913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1646ef0 (9): Bad file descriptor
00:25:27.513 [2024-07-15 20:20:18.109027] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:27.513
00:25:27.513 Latency(us)
00:25:27.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.513 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:27.513 Verification LBA range: start 0x0 length 0x4000
00:25:27.513 NVMe0n1 : 15.01 11785.29 46.04 795.25 0.00 10146.90 1058.13 20097.71
00:25:27.513 ===================================================================================================================
00:25:27.513 Total : 11785.29 46.04 795.25 0.00 10146.90 1058.13 20097.71
00:25:27.513 Received shutdown signal, test time was about 15.000000 seconds
00:25:27.513
00:25:27.513 Latency(us)
00:25:27.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.513 ===================================================================================================================
00:25:27.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1105500
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1105500 /var/tmp/bdevperf.sock
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:27.513 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1105500 ']'
00:25:27.514 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:27.514 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:27.514 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.514 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.514 20:20:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.084 20:20:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.084 20:20:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:28.084 20:20:25 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:28.084 [2024-07-15 20:20:25.394230] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:28.084 20:20:25 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:28.344 [2024-07-15 20:20:25.558638] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:28.344 20:20:25 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.604 NVMe0n1 00:25:28.604 20:20:25 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.865 00:25:28.865 20:20:26 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:29.162 00:25:29.162 20:20:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:29.162 20:20:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:29.422 20:20:26 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:29.422 20:20:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:32.721 20:20:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.721 20:20:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:32.721 20:20:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1106522 00:25:32.721 20:20:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:32.721 20:20:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1106522 00:25:33.663 0 00:25:33.663 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
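The trace above is the multipath setup for the second bdevperf instance: extra listeners are added on ports 4421 and 4422, and every path is attached under the same controller name (NVMe0) so that detaching the active path forces bdev_nvme to fail over to the next transport ID. A condensed sketch of the same sequence, using the RPC commands, addresses, socket paths and subsystem NQN copied from this run; this is an illustrative outline, not the failover.sh implementation:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Extra listeners on the target side (the primary 4420 listener already exists).
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Register every path under the same controller name so bdev_nvme can fail over.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # Drop the active path; I/O should continue on 4421 after a controller reset.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Keeping one bdev name per set of paths is what lets the "Start failover from X to Y" notices earlier in the log appear without the verify workload ever seeing an I/O error.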
00:25:33.663 [2024-07-15 20:20:24.482509] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:25:33.663 [2024-07-15 20:20:24.482569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105500 ] 00:25:33.663 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.663 [2024-07-15 20:20:24.541496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.663 [2024-07-15 20:20:24.605200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.663 [2024-07-15 20:20:26.743409] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:33.663 [2024-07-15 20:20:26.743454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.663 [2024-07-15 20:20:26.743465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.663 [2024-07-15 20:20:26.743475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.663 [2024-07-15 20:20:26.743483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.663 [2024-07-15 20:20:26.743491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.663 [2024-07-15 20:20:26.743498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.663 [2024-07-15 20:20:26.743506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.663 [2024-07-15 20:20:26.743513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.663 [2024-07-15 20:20:26.743520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.663 [2024-07-15 20:20:26.743547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.663 [2024-07-15 20:20:26.743562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233cef0 (9): Bad file descriptor 00:25:33.663 [2024-07-15 20:20:26.798696] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:33.663 Running I/O for 1 seconds... 
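The bdevperf instance in this phase runs idle in RPC mode (-z) and is driven from outside: the harness waits for its UNIX socket, attaches the NVMe paths, then triggers the configured job with bdevperf.py perform_tests while paths are detached underneath it. A rough sketch of that driving pattern, with binary and helper paths copied from the trace; the readiness loop is a stand-in for the harness's waitforlisten helper, not its actual implementation:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z): no I/O runs until bdevs are attached and perform_tests is called.
  "$spdk/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Poll the RPC socket until it answers (crude substitute for waitforlisten).
  until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # ... attach the NVMe0 paths here (see the sketch above), then run the configured job:
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
  # bdevperf keeps running after the job finishes; the harness kills $bdevperf_pid at the end.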
00:25:33.663
00:25:33.663 Latency(us)
00:25:33.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:33.663 Verification LBA range: start 0x0 length 0x4000
00:25:33.663 NVMe0n1 : 1.04 11567.78 45.19 0.00 0.00 10607.20 1160.53 42161.49
00:25:33.663 ===================================================================================================================
00:25:33.663 Total : 11567.78 45.19 0.00 0.00 10607.20 1160.53 42161.49
00:25:33.924 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:33.924 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:33.924 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:34.185 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:34.185 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:34.185 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:34.446 20:20:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
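Each run ends with a latency summary like the one just reformatted above. If the headline numbers need to be pulled out of a captured log for trending, a short awk pass over the per-bdev row is enough. A sketch, assuming the output was captured to try.txt as in this test; it locates the ":" separator first so an optional elapsed-time prefix does not shift the field positions:

  # Print runtime(s) and IOPS from the NVMe0n1 summary row of a saved bdevperf log.
  grep -E 'NVMe0n1 +:' try.txt | awk '{
      for (i = 1; i <= NF; i++)
          if ($i == ":") { print "runtime=" $(i+1), "iops=" $(i+2); break }
  }'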
20:20:35 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:38.004 rmmod nvme_tcp 00:25:38.004 rmmod nvme_fabrics 00:25:38.004 rmmod nvme_keyring 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1101788 ']' 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1101788 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1101788 ']' 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1101788 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1101788 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1101788' 00:25:38.004 killing process with pid 1101788 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1101788 00:25:38.004 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1101788 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.263 20:20:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.807 20:20:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:40.807 00:25:40.807 real 0m39.450s 00:25:40.807 user 2m1.902s 00:25:40.807 sys 0m8.101s 00:25:40.807 20:20:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.807 20:20:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
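The nvmftestfini teardown traced above reduces to roughly the sequence below. This is a sketch, not the helper itself: killprocess is the autotest_common.sh helper and is approximated here with a plain kill/wait, $nvmfpid stands for the target pid (1101788 in this run), and cvl_0_1 is the initiator-side interface name on this particular rig.

    # unload the kernel NVMe/TCP initiator modules pulled in for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf target started at the beginning of the test and wait for it to exit
    kill "$nvmfpid" && wait "$nvmfpid"
    # flush the test addresses from the initiator-side interface
    ip -4 addr flush cvl_0_1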
00:25:40.807 ************************************ 00:25:40.807 END TEST nvmf_failover 00:25:40.807 ************************************ 00:25:40.807 20:20:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:40.807 20:20:37 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:40.807 20:20:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:40.807 20:20:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.807 20:20:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.807 ************************************ 00:25:40.807 START TEST nvmf_host_discovery 00:25:40.807 ************************************ 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:40.807 * Looking for test storage... 00:25:40.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.807 20:20:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:40.808 20:20:37 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.808 20:20:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.401 20:20:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:47.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:47.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:47.401 20:20:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:47.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:47.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.401 20:20:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:47.401 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.402 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.402 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:47.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:25:47.663 00:25:47.663 --- 10.0.0.2 ping statistics --- 00:25:47.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.663 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:25:47.663 00:25:47.663 --- 10.0.0.1 ping statistics --- 00:25:47.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.663 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1111530 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1111530 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1111530 ']' 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.663 20:20:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.663 [2024-07-15 20:20:44.962551] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:25:47.663 [2024-07-15 20:20:44.962607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.663 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.663 [2024-07-15 20:20:45.047492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.925 [2024-07-15 20:20:45.135345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.925 [2024-07-15 20:20:45.135402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.925 [2024-07-15 20:20:45.135410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.925 [2024-07-15 20:20:45.135417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.925 [2024-07-15 20:20:45.135423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
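Before the discovery assertions begin, the wiring between the two SPDK applications in this test is worth spelling out. The sketch below condenses the rpc_cmd calls traced in the following lines into plain rpc.py invocations; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the workspace path, network namespace, IP addresses and NQNs are specific to this run.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target side: nvmf_tgt -m 0x2 runs inside the cvl_0_0_ns_spdk namespace (see the EAL line above);
    # it exposes a discovery service on 10.0.0.2:8009 and two null bdevs to publish later
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $SPDK/scripts/rpc.py bdev_null_create null0 1000 512
    $SPDK/scripts/rpc.py bdev_null_create null1 1000 512
    # host side: a second nvmf_tgt (-m 0x1 -r /tmp/host.sock) acts as the discovery client and
    # is pointed at the discovery service on 10.0.0.2:8009
    $SPDK/scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test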
00:25:47.925 [2024-07-15 20:20:45.135447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 [2024-07-15 20:20:45.790696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 [2024-07-15 20:20:45.798879] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 null0 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 null1 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1111872 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1111872 /tmp/host.sock 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1111872 ']' 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:48.498 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.498 20:20:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.498 [2024-07-15 20:20:45.888809] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:25:48.498 [2024-07-15 20:20:45.888870] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111872 ] 00:25:48.498 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.759 [2024-07-15 20:20:45.952384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.759 [2024-07-15 20:20:46.027274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:49.330 20:20:46 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:49.330 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.591 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:49.591 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:49.592 20:20:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.592 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:49.592 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:49.592 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.592 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.592 [2024-07-15 20:20:47.021996] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.854 
20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:49.854 20:20:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:50.426 [2024-07-15 20:20:47.688662] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:50.426 [2024-07-15 20:20:47.688684] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:50.426 [2024-07-15 20:20:47.688698] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:50.426 [2024-07-15 20:20:47.776986] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:50.687 [2024-07-15 20:20:47.964971] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:50.687 [2024-07-15 20:20:47.964994] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:50.948 20:20:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.948 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:51.209 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.209 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:51.209 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.210 [2024-07-15 20:20:48.590142] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.210 [2024-07-15 20:20:48.590333] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:51.210 [2024-07-15 20:20:48.590358] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.210 20:20:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.471 [2024-07-15 20:20:48.677575] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:51.471 20:20:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:51.732 [2024-07-15 20:20:48.950051] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:51.732 [2024-07-15 20:20:48.950074] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:51.732 [2024-07-15 20:20:48.950080] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.676 [2024-07-15 20:20:49.874598] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:52.676 [2024-07-15 20:20:49.874620] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:52.676 [2024-07-15 20:20:49.876830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.676 [2024-07-15 20:20:49.876852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.676 [2024-07-15 20:20:49.876862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.676 [2024-07-15 20:20:49.876869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.676 [2024-07-15 20:20:49.876876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.676 [2024-07-15 20:20:49.876883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.676 [2024-07-15 20:20:49.876891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.676 [2024-07-15 20:20:49.876898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.676 [2024-07-15 20:20:49.876905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.676 20:20:49 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.676 [2024-07-15 20:20:49.886844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.676 [2024-07-15 20:20:49.896882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.676 [2024-07-15 20:20:49.897380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.676 [2024-07-15 20:20:49.897416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.676 [2024-07-15 20:20:49.897427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.676 [2024-07-15 20:20:49.897445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.676 [2024-07-15 20:20:49.897456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.676 [2024-07-15 20:20:49.897463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:52.676 [2024-07-15 20:20:49.897471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.676 [2024-07-15 20:20:49.897486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
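[editor's note] The xtrace lines at common/autotest_common.sh@912-918 above reveal the polling helper the test leans on: store the condition, retry up to 10 times, sleep 1s between attempts. A minimal sketch reconstructing that pattern from what the trace shows (the canonical helper lives in common/autotest_common.sh and may differ in details such as the failure return):

    # Minimal sketch of the retry loop visible at autotest_common.sh@912-918.
    # Only mirrors what the xtrace output shows: up to 10 attempts, 1s apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1   # assumption: not visible in the trace
    }

    # Example use, matching host/discovery.sh@120 above:
    # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'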
00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.676 [2024-07-15 20:20:49.906938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.676 [2024-07-15 20:20:49.907435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.676 [2024-07-15 20:20:49.907472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.676 [2024-07-15 20:20:49.907483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.676 [2024-07-15 20:20:49.907503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.676 [2024-07-15 20:20:49.907516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.676 [2024-07-15 20:20:49.907524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:52.676 [2024-07-15 20:20:49.907533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.676 [2024-07-15 20:20:49.907549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.676 [2024-07-15 20:20:49.916992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.676 [2024-07-15 20:20:49.917453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.676 [2024-07-15 20:20:49.917490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.676 [2024-07-15 20:20:49.917501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.676 [2024-07-15 20:20:49.917518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.676 [2024-07-15 20:20:49.917530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.676 [2024-07-15 20:20:49.917536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:52.676 [2024-07-15 20:20:49.917544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.676 [2024-07-15 20:20:49.917559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
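[editor's note] The condition being polled here is the path check at host/discovery.sh@63/@122: after the 4421 listener is added, the test waits until the nvme0 controller reports both ports. A sketch of that helper as inferred from the jq/sort/xargs pipeline in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; /tmp/host.sock is the host app's RPC socket as used throughout this trace):

    # Path check exercised at host/discovery.sh@63: list the trsvcid of every
    # path of the named controller on the host socket, sorted into one line.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # The test loops until this holds (NVMF_PORT=4420, NVMF_SECOND_PORT=4421
    # per test/nvmf/common.sh):
    # [[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]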
00:25:52.676 [2024-07-15 20:20:49.927048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.676 [2024-07-15 20:20:49.927511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.676 [2024-07-15 20:20:49.927525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.676 [2024-07-15 20:20:49.927533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.676 [2024-07-15 20:20:49.927544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.676 [2024-07-15 20:20:49.927554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.676 [2024-07-15 20:20:49.927560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:52.676 [2024-07-15 20:20:49.927567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.676 [2024-07-15 20:20:49.927577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:52.676 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:52.676 [2024-07-15 20:20:49.937105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.677 [2024-07-15 20:20:49.938319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.677 [2024-07-15 20:20:49.938340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.677 [2024-07-15 20:20:49.938348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.677 [2024-07-15 20:20:49.938363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.677 [2024-07-15 20:20:49.938383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.677 [2024-07-15 20:20:49.938390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:25:52.677 [2024-07-15 20:20:49.938397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.677 [2024-07-15 20:20:49.938410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 [2024-07-15 20:20:49.947156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.677 [2024-07-15 20:20:49.947578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.677 [2024-07-15 20:20:49.947591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.677 [2024-07-15 20:20:49.947599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.677 [2024-07-15 20:20:49.947610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.677 [2024-07-15 20:20:49.947628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.677 [2024-07-15 20:20:49.947635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:52.677 [2024-07-15 20:20:49.947642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.677 [2024-07-15 20:20:49.947652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.677 [2024-07-15 20:20:49.957212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:52.677 [2024-07-15 20:20:49.957649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.677 [2024-07-15 20:20:49.957660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9669b0 with addr=10.0.0.2, port=4420 00:25:52.677 [2024-07-15 20:20:49.957667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9669b0 is same with the state(5) to be set 00:25:52.677 [2024-07-15 20:20:49.957678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9669b0 (9): Bad file descriptor 00:25:52.677 [2024-07-15 20:20:49.957698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:52.677 [2024-07-15 20:20:49.957705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:52.677 [2024-07-15 20:20:49.957712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:52.677 [2024-07-15 20:20:49.957722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
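[editor's note] Interleaved with the reconnect errors above is the bdev-list check at host/discovery.sh@55/@130, which confirms both namespaces (nvme0n1, nvme0n2) stay attached while the 4420 path is being torn down. A sketch of that helper as it appears in the xtrace output:

    # Bdev check at host/discovery.sh@55: collect namespace bdev names from the
    # host app and normalize them into a single sorted line, so the comparison
    # against "nvme0n1 nvme0n2" is order-independent.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }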
00:25:52.677 [2024-07-15 20:20:49.962713] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:52.677 [2024-07-15 20:20:49.962731] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 20:20:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.677 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.938 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:52.939 
20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.939 20:20:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.366 [2024-07-15 20:20:51.329316] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.366 [2024-07-15 20:20:51.329335] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.366 [2024-07-15 20:20:51.329348] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.366 [2024-07-15 20:20:51.416624] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:54.366 [2024-07-15 20:20:51.687348] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:54.366 [2024-07-15 20:20:51.687378] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.366 request: 00:25:54.366 { 00:25:54.366 "name": "nvme", 00:25:54.366 "trtype": "tcp", 00:25:54.366 "traddr": "10.0.0.2", 00:25:54.366 "adrfam": "ipv4", 00:25:54.366 "trsvcid": "8009", 00:25:54.366 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:54.366 "wait_for_attach": true, 00:25:54.366 "method": "bdev_nvme_start_discovery", 00:25:54.366 "req_id": 1 00:25:54.366 } 00:25:54.366 Got JSON-RPC error response 00:25:54.366 response: 00:25:54.366 { 00:25:54.366 "code": -17, 00:25:54.366 "message": "File exists" 00:25:54.366 } 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:54.366 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.367 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.652 request: 00:25:54.652 { 00:25:54.652 "name": "nvme_second", 00:25:54.652 "trtype": "tcp", 00:25:54.652 "traddr": "10.0.0.2", 00:25:54.652 "adrfam": "ipv4", 00:25:54.652 "trsvcid": "8009", 00:25:54.652 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:54.652 "wait_for_attach": true, 00:25:54.652 "method": "bdev_nvme_start_discovery", 00:25:54.652 "req_id": 1 00:25:54.652 } 00:25:54.652 Got JSON-RPC error response 00:25:54.652 response: 00:25:54.652 { 00:25:54.652 "code": -17, 00:25:54.652 "message": "File exists" 00:25:54.652 } 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.652 20:20:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.652 20:20:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.592 [2024-07-15 20:20:52.946810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.592 [2024-07-15 20:20:52.946838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a4ec0 with addr=10.0.0.2, port=8010 00:25:55.592 [2024-07-15 20:20:52.946851] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:55.592 [2024-07-15 20:20:52.946858] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:55.592 [2024-07-15 20:20:52.946865] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:56.531 [2024-07-15 20:20:53.949158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-07-15 20:20:53.949180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a4ec0 with addr=10.0.0.2, port=8010 00:25:56.531 [2024-07-15 20:20:53.949192] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:56.531 [2024-07-15 20:20:53.949203] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:56.531 [2024-07-15 20:20:53.949209] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:57.913 [2024-07-15 20:20:54.951196] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:57.913 request: 00:25:57.913 { 00:25:57.913 "name": "nvme_second", 00:25:57.913 "trtype": "tcp", 00:25:57.913 "traddr": "10.0.0.2", 00:25:57.913 "adrfam": "ipv4", 00:25:57.913 "trsvcid": "8010", 00:25:57.913 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:57.913 "wait_for_attach": false, 00:25:57.913 "attach_timeout_ms": 3000, 00:25:57.913 "method": "bdev_nvme_start_discovery", 00:25:57.913 "req_id": 1 00:25:57.913 } 00:25:57.913 Got JSON-RPC error response 00:25:57.913 response: 00:25:57.913 { 00:25:57.913 "code": -110, 
00:25:57.913 "message": "Connection timed out" 00:25:57.913 } 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:57.913 20:20:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1111872 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.913 rmmod nvme_tcp 00:25:57.913 rmmod nvme_fabrics 00:25:57.913 rmmod nvme_keyring 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1111530 ']' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1111530 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1111530 ']' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1111530 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111530 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1111530' 00:25:57.913 killing process with pid 1111530 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1111530 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1111530 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.913 20:20:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.461 20:20:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.462 00:26:00.462 real 0m19.613s 00:26:00.462 user 0m23.320s 00:26:00.462 sys 0m6.658s 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.462 ************************************ 00:26:00.462 END TEST nvmf_host_discovery 00:26:00.462 ************************************ 00:26:00.462 20:20:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:00.462 20:20:57 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:00.462 20:20:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:00.462 20:20:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.462 20:20:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:00.462 ************************************ 00:26:00.462 START TEST nvmf_host_multipath_status 00:26:00.462 ************************************ 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:00.462 * Looking for test storage... 
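[editor's note] With nvmf_host_discovery finished, run_test launches the next host test. The lifecycle visible in this trace is the same for both scripts: source the shared nvmf helpers, bring the transport up, arm a cleanup trap, and tear everything down on exit. A simplified skeleton under those assumptions (the real scripts live in test/nvmf/host/ and compute their own paths and options):

    # Skeleton of the host-test lifecycle seen in this trace.
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../../..")
    source "$rootdir/test/nvmf/common.sh"

    nvmftestinit                           # prepares net devices and the tcp transport
    trap nvmftestfini SIGINT SIGTERM EXIT  # cleanup runs even on failure (nvmf/common.sh@446)
    # ... test body: start nvmf_tgt, attach the host app, run the checks ...
    nvmftestfini                           # kills the target, unloads nvme-tcp/nvme-fabrics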
00:26:00.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:00.462 20:20:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:00.462 20:20:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.052 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:07.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:07.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
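
The nvmf/common.sh trace above is enumerating NICs the TCP tests can use: it filters PCI devices by vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox device IDs) and keeps the two ice-bound E810 ports found at 0000:4b:00.0 and 0000:4b:00.1; the pci_net_devs lines that follow then resolve each PCI address to its kernel net device through sysfs. The fragment below is a minimal standalone sketch of that lookup, not the test code itself; the ID list is copied from the trace and the /sys/bus/pci/devices/<addr>/net/ layout is assumed to be the standard kernel one.

  #!/usr/bin/env bash
  # Sketch only: map supported NVMe-oF NIC PCI IDs to their net devices via sysfs.
  intel=0x8086; mellanox=0x15b3
  supported=("$intel:0x1592" "$intel:0x159b" "$intel:0x37d2" "$mellanox:0x1017" "$mellanox:0x1019")
  for dev in /sys/bus/pci/devices/*; do
      id="$(<"$dev/vendor"):$(<"$dev/device")"          # e.g. 0x8086:0x159b
      for want in "${supported[@]}"; do
          [[ $id == "$want" ]] || continue
          for net in "$dev"/net/*; do                   # bound netdevs appear under <pci>/net/
              [[ -e $net ]] && echo "${dev##*/} -> ${net##*/}"
          done
      done
  done
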
00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:07.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:07.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.053 20:21:04 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.053 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.315 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.315 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.315 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.315 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.315 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.315 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:07.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:26:07.316 00:26:07.316 --- 10.0.0.2 ping statistics --- 00:26:07.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.316 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:26:07.316 00:26:07.316 --- 10.0.0.1 ping statistics --- 00:26:07.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.316 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1117958 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1117958 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1117958 ']' 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.316 20:21:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:07.576 [2024-07-15 20:21:04.758751] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
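
The lines that follow show the freshly started nvmf_tgt (running under ip netns exec cvl_0_0_ns_spdk) being provisioned over JSON-RPC before bdevperf attaches to it in multipath mode: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace attached, and listeners on 10.0.0.2:4420 and 10.0.0.2:4421 so the initiator gets two paths to the same namespace. Gathered in one place, with the long workspace path abbreviated to $rpc, the sequence is roughly the sketch below; the flags are copied from the rpc.py invocations visible in the trace rather than from multipath_status.sh itself.

  # Sketch of the provisioning the trace performs next ($rpc stands for the
  # spdk/scripts/rpc.py path seen in the log; flags as shown there).
  rpc="./scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same address give bdevperf two TCP paths to one namespace.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The remainder of the section then repeatedly flips the ANA state of each listener with nvmf_subsystem_listener_set_ana_state and verifies, by piping bdev_nvme_get_io_paths through jq filters such as select(.transport.trsvcid=="4420").current, that bdevperf's view of each path (current/connected/accessible) tracks the advertised state.
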
00:26:07.576 [2024-07-15 20:21:04.758818] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.576 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.576 [2024-07-15 20:21:04.829063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:07.576 [2024-07-15 20:21:04.903899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.576 [2024-07-15 20:21:04.903940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.576 [2024-07-15 20:21:04.903948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.576 [2024-07-15 20:21:04.903954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.576 [2024-07-15 20:21:04.903960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.576 [2024-07-15 20:21:04.904101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.576 [2024-07-15 20:21:04.904104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.147 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.147 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:08.147 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:08.147 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:08.147 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:08.147 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.148 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1117958 00:26:08.148 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:08.409 [2024-07-15 20:21:05.708096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.409 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:08.670 Malloc0 00:26:08.670 20:21:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:08.670 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:08.931 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.931 [2024-07-15 20:21:06.306731] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.931 20:21:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:09.192 [2024-07-15 20:21:06.459072] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1118310 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1118310 /var/tmp/bdevperf.sock 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1118310 ']' 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:09.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.192 20:21:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.133 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:10.133 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:10.133 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:10.393 Nvme0n1 00:26:10.393 20:21:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:10.653 Nvme0n1 00:26:10.653 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:10.653 20:21:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:13.194 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:13.194 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:13.194 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.194 20:21:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:14.136 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:14.136 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:14.136 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.136 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.398 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.659 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.659 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.659 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.659 20:21:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.920 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.181 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.181 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:15.181 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.181 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:15.442 20:21:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:16.396 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:16.396 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:16.396 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.396 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.657 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.657 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:16.657 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.657 20:21:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.917 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.178 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.178 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.178 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.178 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:17.442 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.745 20:21:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:17.745 20:21:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:18.729 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:18.729 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.729 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.729 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.990 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.990 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.990 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.990 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.250 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.515 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.515 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.515 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.515 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.776 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.776 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.776 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.776 20:21:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.776 20:21:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.776 20:21:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:19.776 20:21:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:20.037 20:21:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:20.037 20:21:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.424 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.686 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.687 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.687 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.687 20:21:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.687 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.949 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.210 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.210 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:22.210 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:22.210 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:22.470 20:21:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:23.412 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:23.413 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:23.413 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.413 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.673 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.673 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.673 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.673 20:21:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.934 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.195 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.195 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:24.195 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.195 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:24.456 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:24.717 20:21:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.717 20:21:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.104 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.364 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.364 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.364 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.364 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.629 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.629 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:26.629 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.629 20:21:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.629 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.629 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.629 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.629 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.894 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.894 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:27.154 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:27.154 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:27.154 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.415 20:21:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:28.357 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:28.357 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.357 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.357 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.619 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.619 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:28.619 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.619 20:21:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.619 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.619 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.619 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.619 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.879 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.879 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.879 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.879 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.140 20:21:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.140 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.399 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.399 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:29.400 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:29.659 20:21:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:29.659 20:21:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.042 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.302 20:21:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.302 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.562 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.562 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.562 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.562 20:21:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.822 20:21:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.822 20:21:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:31.822 20:21:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:31.822 20:21:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:32.081 20:21:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:33.022 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:33.022 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:33.022 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.022 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.283 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.283 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.283 20:21:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.283 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.544 20:21:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.825 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.086 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.086 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:34.086 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:34.347 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:34.347 20:21:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.731 20:21:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.731 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.731 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.731 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.731 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.995 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.292 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.292 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:36.292 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.292 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1118310 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1118310 ']' 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1118310 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118310 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118310' 00:26:36.553 killing process with pid 1118310 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1118310 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1118310 00:26:36.553 Connection closed with partial response: 00:26:36.553 00:26:36.553 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1118310 00:26:36.553 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:36.553 [2024-07-15 20:21:06.519322] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:26:36.553 [2024-07-15 20:21:06.519381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118310 ] 00:26:36.553 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.553 [2024-07-15 20:21:06.569314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.553 [2024-07-15 20:21:06.621289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.553 Running I/O for 90 seconds... 
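For reference, the multipath_status.sh trace above repeats one probe per port/field pair; the following is a minimal sketch of that check, reconstructed from the traced rpc.py and jq invocations (the helper name and layout here are illustrative, not copied from the script itself):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# port_status PORT FIELD EXPECTED
# PORT is 4420 or 4421, FIELD is current/connected/accessible, EXPECTED is true/false.
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# check_status in the trace is six such probes (both ports x current/connected/accessible),
# run one second after the listeners' ANA states are flipped with calls like:
#   "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
#       -t tcp -a 10.0.0.2 -s 4421 -n inaccessible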
00:26:36.553 [2024-07-15 20:21:19.609154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.553 [2024-07-15 20:21:19.609375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:36.553 [2024-07-15 20:21:19.609385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.554 [2024-07-15 20:21:19.609644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:36.554 [2024-07-15 20:21:19.609882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.609984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.609996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.554 [2024-07-15 20:21:19.610510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:36.554 [2024-07-15 20:21:19.610523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
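For reference, every completion in this dump carries status (03/02): status code type 0x3 (path related) with status code 0x02, which the log prints as ASYMMETRIC ACCESS INACCESSIBLE; this is the status a host sees on a path whose ANA state is inaccessible, the state the set_ANA_state calls above toggle. A minimal decoding sketch (function name and table coverage are illustrative, limited to the code visible in this log):

# decode_status SCT SC   -- e.g. decode_status 03 02
decode_status() {
    case "$1/$2" in
        03/02) echo "Path Related Status: Asymmetric Access Inaccessible" ;;
        *)     echo "sct=$1 sc=$2 (see the NVMe status code tables)" ;;
    esac
}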
00:26:36.555 [2024-07-15 20:21:19.610785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.610861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.610866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.611729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.611734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:36.555 [2024-07-15 20:21:19.612331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.555 [2024-07-15 20:21:19.612352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.555 [2024-07-15 20:21:19.612372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.555 [2024-07-15 20:21:19.612393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.555 [2024-07-15 20:21:19.612414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.555 [2024-07-15 20:21:19.612434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:36.555 [2024-07-15 20:21:19.612449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.555 [2024-07-15 20:21:19.612455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:19.612882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.612988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.612994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:26:36.556 [2024-07-15 20:21:19.613032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:19.613233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:19.613238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:31.714070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:31.714109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:31.714129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:31.714149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.556 [2024-07-15 20:21:31.714164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:31.714186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:31.714201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.556 [2024-07-15 20:21:31.714216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:36.556 [2024-07-15 20:21:31.714227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.557 [2024-07-15 20:21:31.714232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:36.557 [2024-07-15 20:21:31.714603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.714614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.557 [2024-07-15 20:21:31.714619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:36.557 [2024-07-15 20:21:31.715090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.557 [2024-07-15 20:21:31.715100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:36.557 Received shutdown signal, test time was about 25.615852 seconds 00:26:36.557 00:26:36.557 Latency(us) 00:26:36.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.557 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:36.557 Verification LBA range: start 0x0 length 0x4000 00:26:36.557 Nvme0n1 : 25.62 11150.08 43.56 0.00 0.00 11460.68 440.32 3019898.88 00:26:36.557 =================================================================================================================== 00:26:36.557 Total : 11150.08 43.56 0.00 0.00 11460.68 440.32 3019898.88 00:26:36.557 20:21:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.817 rmmod nvme_tcp 00:26:36.817 rmmod nvme_fabrics 00:26:36.817 rmmod nvme_keyring 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1117958 ']' 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1117958 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1117958 ']' 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # 
kill -0 1117958 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117958 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117958' 00:26:36.817 killing process with pid 1117958 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1117958 00:26:36.817 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1117958 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.077 20:21:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.620 20:21:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:39.620 00:26:39.620 real 0m39.028s 00:26:39.620 user 1m40.762s 00:26:39.620 sys 0m10.650s 00:26:39.620 20:21:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:39.620 20:21:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.620 ************************************ 00:26:39.620 END TEST nvmf_host_multipath_status 00:26:39.620 ************************************ 00:26:39.620 20:21:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:39.620 20:21:36 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:39.620 20:21:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:39.620 20:21:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.620 20:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.620 ************************************ 00:26:39.620 START TEST nvmf_discovery_remove_ifc 00:26:39.620 ************************************ 00:26:39.620 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:39.620 * Looking for test storage... 
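For reference, the nvmftestfini teardown traced above reduces to a short command sequence. A minimal sketch of the equivalent manual cleanup, assuming the same workspace path as this run; the SPDK and NVMF_PID variable names are illustrative (the traced pid was 1117958):

# Teardown sketch matching the trace above (paths and NQN taken from this run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NVMF_PID=1117958

# Remove the subsystem the multipath test created on the target.
"$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Unload the kernel initiator modules (nvme-tcp first, then nvme-fabrics).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt application and flush the initiator-side test address.
kill "$NVMF_PID"   # the harness's killprocess() also waits on the pid it launched
ip -4 addr flush cvl_0_1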
00:26:39.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.620 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.620 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:39.620 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.620 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.621 20:21:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:46.209 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:46.209 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.209 20:21:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:46.209 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:46.209 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.209 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:46.210 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:46.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:26:46.470 00:26:46.470 --- 10.0.0.2 ping statistics --- 00:26:46.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.470 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:46.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:26:46.470 00:26:46.470 --- 10.0.0.1 ping statistics --- 00:26:46.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.470 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1128507 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1128507 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1128507 ']' 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:46.470 20:21:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.471 [2024-07-15 20:21:43.819578] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
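The nvmftestinit trace above builds the TCP test bed by moving the target NIC into its own network namespace. A condensed sketch of those steps, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used in this run; SPDK is an illustrative variable pointing at the workspace checkout:

# Target NIC goes into a private namespace; the initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address the initiator side (10.0.0.1) and the namespaced target side (10.0.0.2).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links up and let NVMe/TCP traffic (port 4420) through the host firewall.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability both ways, load the initiator module, then start the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &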
00:26:46.471 [2024-07-15 20:21:43.819643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.471 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.731 [2024-07-15 20:21:43.905128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.731 [2024-07-15 20:21:43.996990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.731 [2024-07-15 20:21:43.997046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.731 [2024-07-15 20:21:43.997053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.731 [2024-07-15 20:21:43.997060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.731 [2024-07-15 20:21:43.997066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.731 [2024-07-15 20:21:43.997097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.301 [2024-07-15 20:21:44.654961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.301 [2024-07-15 20:21:44.663140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.301 null0 00:26:47.301 [2024-07-15 20:21:44.695137] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1128560 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1128560 /tmp/host.sock 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1128560 ']' 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.301 20:21:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.561 [2024-07-15 20:21:44.771965] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:26:47.561 [2024-07-15 20:21:44.772026] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128560 ] 00:26:47.561 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.561 [2024-07-15 20:21:44.835770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.561 [2024-07-15 20:21:44.910607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.131 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.132 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.393 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.393 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:48.393 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.393 20:21:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.335 [2024-07-15 20:21:46.665336] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:49.335 [2024-07-15 20:21:46.665357] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:49.335 [2024-07-15 20:21:46.665371] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:49.335 [2024-07-15 20:21:46.754655] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:49.596 [2024-07-15 20:21:46.938614] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:49.596 [2024-07-15 20:21:46.938662] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:49.596 [2024-07-15 20:21:46.938686] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:49.596 [2024-07-15 20:21:46.938699] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:49.596 [2024-07-15 20:21:46.938721] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.596 [2024-07-15 20:21:46.943375] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17187b0 was disconnected and freed. delete nvme_qpair. 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:49.596 20:21:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:49.596 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:49.860 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:49.860 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.860 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.860 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.861 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.861 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.861 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.861 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.861 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.861 20:21:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.861 20:21:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:50.805 20:21:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:52.192 20:21:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.134 20:21:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.075 20:21:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.075 20:21:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:55.019 [2024-07-15 20:21:52.379074] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:55.019 [2024-07-15 20:21:52.379118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.019 [2024-07-15 20:21:52.379135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.019 [2024-07-15 20:21:52.379145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.019 [2024-07-15 20:21:52.379153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.019 [2024-07-15 20:21:52.379161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.019 [2024-07-15 20:21:52.379168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.019 [2024-07-15 20:21:52.379176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.019 [2024-07-15 20:21:52.379183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.019 [2024-07-15 20:21:52.379191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.019 [2024-07-15 20:21:52.379198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.019 [2024-07-15 20:21:52.379205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16df040 is same with the state(5) to be set 00:26:55.019 [2024-07-15 20:21:52.389093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16df040 (9): Bad file descriptor 00:26:55.019 [2024-07-15 20:21:52.399137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:55.019 20:21:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.019 20:21:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.019 20:21:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.019 20:21:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.019 20:21:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.019 20:21:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.019 20:21:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.020 [2024-07-15 20:21:53.417161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:56.020 [2024-07-15 20:21:53.417202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16df040 with addr=10.0.0.2, port=4420 00:26:56.020 [2024-07-15 20:21:53.417214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16df040 is same with the state(5) to be set 00:26:56.020 [2024-07-15 20:21:53.417243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16df040 (9): Bad file descriptor 00:26:56.020 [2024-07-15 20:21:53.417611] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:56.020 [2024-07-15 20:21:53.417634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:56.020 [2024-07-15 20:21:53.417641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:56.020 [2024-07-15 20:21:53.417649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:56.020 [2024-07-15 20:21:53.417667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.020 [2024-07-15 20:21:53.417675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:56.020 20:21:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.020 20:21:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:56.020 20:21:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.405 [2024-07-15 20:21:54.420052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.405 [2024-07-15 20:21:54.420071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.405 [2024-07-15 20:21:54.420079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.405 [2024-07-15 20:21:54.420086] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:57.405 [2024-07-15 20:21:54.420099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
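The repeated bdev_get_bdevs calls above are the test's get_bdev_list/wait_for_bdev polling: the host-side app on RPC socket /tmp/host.sock is queried until its bdev list matches the expected value. A condensed sketch of that pattern as it appears in the trace, not the exact in-tree helpers; rpc_cmd is the autotest helper seen throughout the trace, and outside the harness scripts/rpc.py -s /tmp/host.sock can be used the same way:

# List the bdev names known to the host app, normalized to one sorted line.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once per second until the list equals the expected value
# ("" while the path is down, nvme0n1 or nvme1n1 while a controller is attached).
wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

# Pull the target path out from under the attached controller, then wait for the bdev to vanish;
# this is the step that produced the reset/reconnect errors logged above.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''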
00:26:57.405 [2024-07-15 20:21:54.420118] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:57.405 [2024-07-15 20:21:54.420144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.405 [2024-07-15 20:21:54.420154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.405 [2024-07-15 20:21:54.420164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.405 [2024-07-15 20:21:54.420172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.405 [2024-07-15 20:21:54.420180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.405 [2024-07-15 20:21:54.420187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.405 [2024-07-15 20:21:54.420195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.405 [2024-07-15 20:21:54.420202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.405 [2024-07-15 20:21:54.420210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.405 [2024-07-15 20:21:54.420217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.405 [2024-07-15 20:21:54.420224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
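The connect() timeouts (errno 110, ETIMEDOUT) and the removed discovery entry above are the expected consequence of the target-side interface going away. The removal commands themselves are not visible in this part of the trace; a plausible inverse of the restore commands shown just below (purely illustrative, not taken from the script) would be:

  # Illustrative only: withdraw the target address inside the target namespace so that
  # host connections to 10.0.0.2:4420/8009 start failing with errno 110.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down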
00:26:57.405 [2024-07-15 20:21:54.420709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16de4c0 (9): Bad file descriptor 00:26:57.405 [2024-07-15 20:21:54.421720] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:57.405 [2024-07-15 20:21:54.421730] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:57.405 20:21:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:58.346 20:21:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.288 [2024-07-15 20:21:56.479423] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:59.288 [2024-07-15 20:21:56.479440] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:59.288 [2024-07-15 20:21:56.479453] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:59.288 [2024-07-15 20:21:56.608840] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:59.288 [2024-07-15 20:21:56.668767] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:59.288 [2024-07-15 20:21:56.668806] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:59.288 [2024-07-15 20:21:56.668828] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:59.288 [2024-07-15 20:21:56.668841] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:59.288 [2024-07-15 20:21:56.668854] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:59.288 [2024-07-15 20:21:56.676635] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16f5310 was disconnected and freed. delete nvme_qpair. 
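Once the address and link are restored, the discovery poller re-attaches the subsystem as nvme1, and the wait_for_bdev nvme1n1 step above goes back to polling get_bdev_list once per second until the new bdev shows up. A sketch of that wait loop, under the same assumptions as the helper sketched earlier (matching by substring here for brevity; the script compares the full list against the expected name):

  # Sketch only: poll the host app until the named bdev appears, one probe per second.
  wait_for_bdev() {
      local bdev=$1
      while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme1n1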
00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.288 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1128560 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1128560 ']' 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1128560 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128560 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128560' 00:26:59.549 killing process with pid 1128560 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1128560 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1128560 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.549 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.549 rmmod nvme_tcp 00:26:59.549 rmmod nvme_fabrics 00:26:59.549 rmmod nvme_keyring 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
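Teardown then goes through a killprocess helper: validate the PID argument, confirm the process is still alive with kill -0, inspect its comm name to decide whether sudo is needed, then signal it and wait for it to exit. A condensed sketch of that flow as it appears in the xtrace (error handling and the exact autotest_common.sh structure are trimmed):

  # Sketch only: stop an SPDK app launched by the test and reap it.
  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0       # nothing left to kill
      local name
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [[ "$name" == sudo ]]; then
          sudo kill "$pid"                          # assumption: escalate only for sudo-wrapped apps
      else
          kill "$pid"
      fi
      wait "$pid" || true                           # the app is a child of this shell, so wait works
  }

Here the host app (reactor_0, pid 1128560) is stopped first, after which nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, as the rmmod lines show.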
00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1128507 ']' 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1128507 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1128507 ']' 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1128507 00:26:59.811 20:21:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128507 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128507' 00:26:59.811 killing process with pid 1128507 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1128507 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1128507 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.811 20:21:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.359 20:21:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:02.359 00:27:02.359 real 0m22.721s 00:27:02.359 user 0m27.054s 00:27:02.359 sys 0m6.509s 00:27:02.359 20:21:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:02.359 20:21:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.359 ************************************ 00:27:02.359 END TEST nvmf_discovery_remove_ifc 00:27:02.359 ************************************ 00:27:02.359 20:21:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:02.359 20:21:59 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:02.359 20:21:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:02.359 20:21:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.359 20:21:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:02.359 ************************************ 00:27:02.359 START TEST nvmf_identify_kernel_target 00:27:02.359 ************************************ 
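The identify_kernel_nvmf test that starts here exports a local NVMe namespace through the kernel nvmet stack instead of the SPDK target, then runs spdk_nvme_identify against it as an initiator. The configfs sequence it performs appears piecewise later in this trace (modprobe nvmet, then mkdir/echo/ln -s under /sys/kernel/config/nvmet); condensed into one place, with the NQN, block device and listener values taken from the trace and the attribute file names filled in as assumptions (the xtrace shows only bare echo commands), it looks roughly like:

  # Condensed sketch of the kernel-target setup seen later in this trace.
  # Attribute paths (attr_model, attr_allow_any_host, device_path, enable, addr_*)
  # are the standard nvmet configfs names and are assumed, not read from the log.
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

An nvme discover against 10.0.0.1:4420, as run further down, should then report two entries: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.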
00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:02.359 * Looking for test storage... 00:27:02.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:02.359 20:21:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.359 20:21:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:08.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:08.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.978 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:08.979 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:08.979 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.979 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:09.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:27:09.240 00:27:09.240 --- 10.0.0.2 ping statistics --- 00:27:09.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.240 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:27:09.240 00:27:09.240 --- 10.0.0.1 ping statistics --- 00:27:09.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.240 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:09.240 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.501 20:22:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:09.501 20:22:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:12.805 Waiting for block devices as requested 00:27:12.805 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:12.805 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:13.065 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:13.065 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:13.065 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:13.326 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:13.326 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:13.326 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:13.586 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:13.586 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:13.586 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:13.846 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:13.846 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:13.846 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:14.105 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:14.105 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:14.105 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:14.365 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:14.365 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:14.365 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:14.366 No valid GPT data, bailing 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:14.366 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:14.627 00:27:14.627 Discovery Log Number of Records 2, Generation counter 2 00:27:14.627 =====Discovery Log Entry 0====== 00:27:14.627 trtype: tcp 00:27:14.627 adrfam: ipv4 00:27:14.627 subtype: current discovery subsystem 00:27:14.627 treq: not specified, sq flow control disable supported 00:27:14.627 portid: 1 00:27:14.627 trsvcid: 4420 00:27:14.627 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:14.627 traddr: 10.0.0.1 00:27:14.627 eflags: none 00:27:14.627 sectype: none 00:27:14.627 =====Discovery Log Entry 1====== 00:27:14.627 trtype: tcp 00:27:14.627 adrfam: ipv4 00:27:14.627 subtype: nvme subsystem 00:27:14.627 treq: not specified, sq flow control disable supported 00:27:14.627 portid: 1 00:27:14.627 trsvcid: 4420 00:27:14.627 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:14.627 traddr: 10.0.0.1 00:27:14.627 eflags: none 00:27:14.627 sectype: none 00:27:14.627 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:14.627 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:14.627 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.627 ===================================================== 00:27:14.627 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:14.627 ===================================================== 00:27:14.627 Controller Capabilities/Features 00:27:14.627 ================================ 00:27:14.627 Vendor ID: 0000 00:27:14.627 Subsystem Vendor ID: 0000 00:27:14.627 Serial Number: 21a41b53498861438fb5 00:27:14.627 Model Number: Linux 00:27:14.627 Firmware Version: 6.7.0-68 00:27:14.627 Recommended Arb Burst: 0 00:27:14.627 IEEE OUI Identifier: 00 00 00 00:27:14.627 Multi-path I/O 00:27:14.627 May have multiple subsystem ports: No 00:27:14.627 May have multiple 
controllers: No 00:27:14.627 Associated with SR-IOV VF: No 00:27:14.627 Max Data Transfer Size: Unlimited 00:27:14.627 Max Number of Namespaces: 0 00:27:14.627 Max Number of I/O Queues: 1024 00:27:14.627 NVMe Specification Version (VS): 1.3 00:27:14.627 NVMe Specification Version (Identify): 1.3 00:27:14.628 Maximum Queue Entries: 1024 00:27:14.628 Contiguous Queues Required: No 00:27:14.628 Arbitration Mechanisms Supported 00:27:14.628 Weighted Round Robin: Not Supported 00:27:14.628 Vendor Specific: Not Supported 00:27:14.628 Reset Timeout: 7500 ms 00:27:14.628 Doorbell Stride: 4 bytes 00:27:14.628 NVM Subsystem Reset: Not Supported 00:27:14.628 Command Sets Supported 00:27:14.628 NVM Command Set: Supported 00:27:14.628 Boot Partition: Not Supported 00:27:14.628 Memory Page Size Minimum: 4096 bytes 00:27:14.628 Memory Page Size Maximum: 4096 bytes 00:27:14.628 Persistent Memory Region: Not Supported 00:27:14.628 Optional Asynchronous Events Supported 00:27:14.628 Namespace Attribute Notices: Not Supported 00:27:14.628 Firmware Activation Notices: Not Supported 00:27:14.628 ANA Change Notices: Not Supported 00:27:14.628 PLE Aggregate Log Change Notices: Not Supported 00:27:14.628 LBA Status Info Alert Notices: Not Supported 00:27:14.628 EGE Aggregate Log Change Notices: Not Supported 00:27:14.628 Normal NVM Subsystem Shutdown event: Not Supported 00:27:14.628 Zone Descriptor Change Notices: Not Supported 00:27:14.628 Discovery Log Change Notices: Supported 00:27:14.628 Controller Attributes 00:27:14.628 128-bit Host Identifier: Not Supported 00:27:14.628 Non-Operational Permissive Mode: Not Supported 00:27:14.628 NVM Sets: Not Supported 00:27:14.628 Read Recovery Levels: Not Supported 00:27:14.628 Endurance Groups: Not Supported 00:27:14.628 Predictable Latency Mode: Not Supported 00:27:14.628 Traffic Based Keep ALive: Not Supported 00:27:14.628 Namespace Granularity: Not Supported 00:27:14.628 SQ Associations: Not Supported 00:27:14.628 UUID List: Not Supported 00:27:14.628 Multi-Domain Subsystem: Not Supported 00:27:14.628 Fixed Capacity Management: Not Supported 00:27:14.628 Variable Capacity Management: Not Supported 00:27:14.628 Delete Endurance Group: Not Supported 00:27:14.628 Delete NVM Set: Not Supported 00:27:14.628 Extended LBA Formats Supported: Not Supported 00:27:14.628 Flexible Data Placement Supported: Not Supported 00:27:14.628 00:27:14.628 Controller Memory Buffer Support 00:27:14.628 ================================ 00:27:14.628 Supported: No 00:27:14.628 00:27:14.628 Persistent Memory Region Support 00:27:14.628 ================================ 00:27:14.628 Supported: No 00:27:14.628 00:27:14.628 Admin Command Set Attributes 00:27:14.628 ============================ 00:27:14.628 Security Send/Receive: Not Supported 00:27:14.628 Format NVM: Not Supported 00:27:14.628 Firmware Activate/Download: Not Supported 00:27:14.628 Namespace Management: Not Supported 00:27:14.628 Device Self-Test: Not Supported 00:27:14.628 Directives: Not Supported 00:27:14.628 NVMe-MI: Not Supported 00:27:14.628 Virtualization Management: Not Supported 00:27:14.628 Doorbell Buffer Config: Not Supported 00:27:14.628 Get LBA Status Capability: Not Supported 00:27:14.628 Command & Feature Lockdown Capability: Not Supported 00:27:14.628 Abort Command Limit: 1 00:27:14.628 Async Event Request Limit: 1 00:27:14.628 Number of Firmware Slots: N/A 00:27:14.628 Firmware Slot 1 Read-Only: N/A 00:27:14.628 Firmware Activation Without Reset: N/A 00:27:14.628 Multiple Update Detection Support: N/A 
00:27:14.628 Firmware Update Granularity: No Information Provided 00:27:14.628 Per-Namespace SMART Log: No 00:27:14.628 Asymmetric Namespace Access Log Page: Not Supported 00:27:14.628 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:14.628 Command Effects Log Page: Not Supported 00:27:14.628 Get Log Page Extended Data: Supported 00:27:14.628 Telemetry Log Pages: Not Supported 00:27:14.628 Persistent Event Log Pages: Not Supported 00:27:14.628 Supported Log Pages Log Page: May Support 00:27:14.628 Commands Supported & Effects Log Page: Not Supported 00:27:14.628 Feature Identifiers & Effects Log Page:May Support 00:27:14.628 NVMe-MI Commands & Effects Log Page: May Support 00:27:14.628 Data Area 4 for Telemetry Log: Not Supported 00:27:14.628 Error Log Page Entries Supported: 1 00:27:14.628 Keep Alive: Not Supported 00:27:14.628 00:27:14.628 NVM Command Set Attributes 00:27:14.628 ========================== 00:27:14.628 Submission Queue Entry Size 00:27:14.628 Max: 1 00:27:14.628 Min: 1 00:27:14.628 Completion Queue Entry Size 00:27:14.628 Max: 1 00:27:14.628 Min: 1 00:27:14.628 Number of Namespaces: 0 00:27:14.628 Compare Command: Not Supported 00:27:14.628 Write Uncorrectable Command: Not Supported 00:27:14.628 Dataset Management Command: Not Supported 00:27:14.628 Write Zeroes Command: Not Supported 00:27:14.628 Set Features Save Field: Not Supported 00:27:14.628 Reservations: Not Supported 00:27:14.628 Timestamp: Not Supported 00:27:14.628 Copy: Not Supported 00:27:14.628 Volatile Write Cache: Not Present 00:27:14.628 Atomic Write Unit (Normal): 1 00:27:14.628 Atomic Write Unit (PFail): 1 00:27:14.628 Atomic Compare & Write Unit: 1 00:27:14.628 Fused Compare & Write: Not Supported 00:27:14.628 Scatter-Gather List 00:27:14.628 SGL Command Set: Supported 00:27:14.628 SGL Keyed: Not Supported 00:27:14.628 SGL Bit Bucket Descriptor: Not Supported 00:27:14.628 SGL Metadata Pointer: Not Supported 00:27:14.628 Oversized SGL: Not Supported 00:27:14.628 SGL Metadata Address: Not Supported 00:27:14.628 SGL Offset: Supported 00:27:14.628 Transport SGL Data Block: Not Supported 00:27:14.628 Replay Protected Memory Block: Not Supported 00:27:14.628 00:27:14.628 Firmware Slot Information 00:27:14.628 ========================= 00:27:14.628 Active slot: 0 00:27:14.628 00:27:14.628 00:27:14.628 Error Log 00:27:14.628 ========= 00:27:14.628 00:27:14.628 Active Namespaces 00:27:14.628 ================= 00:27:14.628 Discovery Log Page 00:27:14.628 ================== 00:27:14.628 Generation Counter: 2 00:27:14.628 Number of Records: 2 00:27:14.628 Record Format: 0 00:27:14.628 00:27:14.628 Discovery Log Entry 0 00:27:14.628 ---------------------- 00:27:14.628 Transport Type: 3 (TCP) 00:27:14.628 Address Family: 1 (IPv4) 00:27:14.628 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:14.628 Entry Flags: 00:27:14.628 Duplicate Returned Information: 0 00:27:14.628 Explicit Persistent Connection Support for Discovery: 0 00:27:14.628 Transport Requirements: 00:27:14.628 Secure Channel: Not Specified 00:27:14.628 Port ID: 1 (0x0001) 00:27:14.628 Controller ID: 65535 (0xffff) 00:27:14.628 Admin Max SQ Size: 32 00:27:14.628 Transport Service Identifier: 4420 00:27:14.628 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:14.628 Transport Address: 10.0.0.1 00:27:14.628 Discovery Log Entry 1 00:27:14.628 ---------------------- 00:27:14.628 Transport Type: 3 (TCP) 00:27:14.628 Address Family: 1 (IPv4) 00:27:14.628 Subsystem Type: 2 (NVM Subsystem) 00:27:14.628 Entry Flags: 
00:27:14.628 Duplicate Returned Information: 0 00:27:14.628 Explicit Persistent Connection Support for Discovery: 0 00:27:14.628 Transport Requirements: 00:27:14.628 Secure Channel: Not Specified 00:27:14.628 Port ID: 1 (0x0001) 00:27:14.628 Controller ID: 65535 (0xffff) 00:27:14.628 Admin Max SQ Size: 32 00:27:14.628 Transport Service Identifier: 4420 00:27:14.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:14.628 Transport Address: 10.0.0.1 00:27:14.628 20:22:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:14.628 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.628 get_feature(0x01) failed 00:27:14.628 get_feature(0x02) failed 00:27:14.628 get_feature(0x04) failed 00:27:14.628 ===================================================== 00:27:14.628 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:14.628 ===================================================== 00:27:14.628 Controller Capabilities/Features 00:27:14.628 ================================ 00:27:14.628 Vendor ID: 0000 00:27:14.628 Subsystem Vendor ID: 0000 00:27:14.628 Serial Number: 6e82fc836d5044d7d84c 00:27:14.628 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:14.628 Firmware Version: 6.7.0-68 00:27:14.628 Recommended Arb Burst: 6 00:27:14.628 IEEE OUI Identifier: 00 00 00 00:27:14.628 Multi-path I/O 00:27:14.628 May have multiple subsystem ports: Yes 00:27:14.628 May have multiple controllers: Yes 00:27:14.628 Associated with SR-IOV VF: No 00:27:14.628 Max Data Transfer Size: Unlimited 00:27:14.628 Max Number of Namespaces: 1024 00:27:14.628 Max Number of I/O Queues: 128 00:27:14.628 NVMe Specification Version (VS): 1.3 00:27:14.628 NVMe Specification Version (Identify): 1.3 00:27:14.628 Maximum Queue Entries: 1024 00:27:14.628 Contiguous Queues Required: No 00:27:14.628 Arbitration Mechanisms Supported 00:27:14.628 Weighted Round Robin: Not Supported 00:27:14.628 Vendor Specific: Not Supported 00:27:14.628 Reset Timeout: 7500 ms 00:27:14.628 Doorbell Stride: 4 bytes 00:27:14.628 NVM Subsystem Reset: Not Supported 00:27:14.628 Command Sets Supported 00:27:14.628 NVM Command Set: Supported 00:27:14.629 Boot Partition: Not Supported 00:27:14.629 Memory Page Size Minimum: 4096 bytes 00:27:14.629 Memory Page Size Maximum: 4096 bytes 00:27:14.629 Persistent Memory Region: Not Supported 00:27:14.629 Optional Asynchronous Events Supported 00:27:14.629 Namespace Attribute Notices: Supported 00:27:14.629 Firmware Activation Notices: Not Supported 00:27:14.629 ANA Change Notices: Supported 00:27:14.629 PLE Aggregate Log Change Notices: Not Supported 00:27:14.629 LBA Status Info Alert Notices: Not Supported 00:27:14.629 EGE Aggregate Log Change Notices: Not Supported 00:27:14.629 Normal NVM Subsystem Shutdown event: Not Supported 00:27:14.629 Zone Descriptor Change Notices: Not Supported 00:27:14.629 Discovery Log Change Notices: Not Supported 00:27:14.629 Controller Attributes 00:27:14.629 128-bit Host Identifier: Supported 00:27:14.629 Non-Operational Permissive Mode: Not Supported 00:27:14.629 NVM Sets: Not Supported 00:27:14.629 Read Recovery Levels: Not Supported 00:27:14.629 Endurance Groups: Not Supported 00:27:14.629 Predictable Latency Mode: Not Supported 00:27:14.629 Traffic Based Keep ALive: Supported 00:27:14.629 Namespace Granularity: Not Supported 
00:27:14.629 SQ Associations: Not Supported 00:27:14.629 UUID List: Not Supported 00:27:14.629 Multi-Domain Subsystem: Not Supported 00:27:14.629 Fixed Capacity Management: Not Supported 00:27:14.629 Variable Capacity Management: Not Supported 00:27:14.629 Delete Endurance Group: Not Supported 00:27:14.629 Delete NVM Set: Not Supported 00:27:14.629 Extended LBA Formats Supported: Not Supported 00:27:14.629 Flexible Data Placement Supported: Not Supported 00:27:14.629 00:27:14.629 Controller Memory Buffer Support 00:27:14.629 ================================ 00:27:14.629 Supported: No 00:27:14.629 00:27:14.629 Persistent Memory Region Support 00:27:14.629 ================================ 00:27:14.629 Supported: No 00:27:14.629 00:27:14.629 Admin Command Set Attributes 00:27:14.629 ============================ 00:27:14.629 Security Send/Receive: Not Supported 00:27:14.629 Format NVM: Not Supported 00:27:14.629 Firmware Activate/Download: Not Supported 00:27:14.629 Namespace Management: Not Supported 00:27:14.629 Device Self-Test: Not Supported 00:27:14.629 Directives: Not Supported 00:27:14.629 NVMe-MI: Not Supported 00:27:14.629 Virtualization Management: Not Supported 00:27:14.629 Doorbell Buffer Config: Not Supported 00:27:14.629 Get LBA Status Capability: Not Supported 00:27:14.629 Command & Feature Lockdown Capability: Not Supported 00:27:14.629 Abort Command Limit: 4 00:27:14.629 Async Event Request Limit: 4 00:27:14.629 Number of Firmware Slots: N/A 00:27:14.629 Firmware Slot 1 Read-Only: N/A 00:27:14.629 Firmware Activation Without Reset: N/A 00:27:14.629 Multiple Update Detection Support: N/A 00:27:14.629 Firmware Update Granularity: No Information Provided 00:27:14.629 Per-Namespace SMART Log: Yes 00:27:14.629 Asymmetric Namespace Access Log Page: Supported 00:27:14.629 ANA Transition Time : 10 sec 00:27:14.629 00:27:14.629 Asymmetric Namespace Access Capabilities 00:27:14.629 ANA Optimized State : Supported 00:27:14.629 ANA Non-Optimized State : Supported 00:27:14.629 ANA Inaccessible State : Supported 00:27:14.629 ANA Persistent Loss State : Supported 00:27:14.629 ANA Change State : Supported 00:27:14.629 ANAGRPID is not changed : No 00:27:14.629 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:14.629 00:27:14.629 ANA Group Identifier Maximum : 128 00:27:14.629 Number of ANA Group Identifiers : 128 00:27:14.629 Max Number of Allowed Namespaces : 1024 00:27:14.629 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:14.629 Command Effects Log Page: Supported 00:27:14.629 Get Log Page Extended Data: Supported 00:27:14.629 Telemetry Log Pages: Not Supported 00:27:14.629 Persistent Event Log Pages: Not Supported 00:27:14.629 Supported Log Pages Log Page: May Support 00:27:14.629 Commands Supported & Effects Log Page: Not Supported 00:27:14.629 Feature Identifiers & Effects Log Page:May Support 00:27:14.629 NVMe-MI Commands & Effects Log Page: May Support 00:27:14.629 Data Area 4 for Telemetry Log: Not Supported 00:27:14.629 Error Log Page Entries Supported: 128 00:27:14.629 Keep Alive: Supported 00:27:14.629 Keep Alive Granularity: 1000 ms 00:27:14.629 00:27:14.629 NVM Command Set Attributes 00:27:14.629 ========================== 00:27:14.629 Submission Queue Entry Size 00:27:14.629 Max: 64 00:27:14.629 Min: 64 00:27:14.629 Completion Queue Entry Size 00:27:14.629 Max: 16 00:27:14.629 Min: 16 00:27:14.629 Number of Namespaces: 1024 00:27:14.629 Compare Command: Not Supported 00:27:14.629 Write Uncorrectable Command: Not Supported 00:27:14.629 Dataset Management Command: Supported 
00:27:14.629 Write Zeroes Command: Supported 00:27:14.629 Set Features Save Field: Not Supported 00:27:14.629 Reservations: Not Supported 00:27:14.629 Timestamp: Not Supported 00:27:14.629 Copy: Not Supported 00:27:14.629 Volatile Write Cache: Present 00:27:14.629 Atomic Write Unit (Normal): 1 00:27:14.629 Atomic Write Unit (PFail): 1 00:27:14.629 Atomic Compare & Write Unit: 1 00:27:14.629 Fused Compare & Write: Not Supported 00:27:14.629 Scatter-Gather List 00:27:14.629 SGL Command Set: Supported 00:27:14.629 SGL Keyed: Not Supported 00:27:14.629 SGL Bit Bucket Descriptor: Not Supported 00:27:14.629 SGL Metadata Pointer: Not Supported 00:27:14.629 Oversized SGL: Not Supported 00:27:14.629 SGL Metadata Address: Not Supported 00:27:14.629 SGL Offset: Supported 00:27:14.629 Transport SGL Data Block: Not Supported 00:27:14.629 Replay Protected Memory Block: Not Supported 00:27:14.629 00:27:14.629 Firmware Slot Information 00:27:14.629 ========================= 00:27:14.629 Active slot: 0 00:27:14.629 00:27:14.629 Asymmetric Namespace Access 00:27:14.629 =========================== 00:27:14.629 Change Count : 0 00:27:14.629 Number of ANA Group Descriptors : 1 00:27:14.629 ANA Group Descriptor : 0 00:27:14.629 ANA Group ID : 1 00:27:14.629 Number of NSID Values : 1 00:27:14.629 Change Count : 0 00:27:14.629 ANA State : 1 00:27:14.629 Namespace Identifier : 1 00:27:14.629 00:27:14.629 Commands Supported and Effects 00:27:14.629 ============================== 00:27:14.629 Admin Commands 00:27:14.629 -------------- 00:27:14.629 Get Log Page (02h): Supported 00:27:14.629 Identify (06h): Supported 00:27:14.629 Abort (08h): Supported 00:27:14.629 Set Features (09h): Supported 00:27:14.629 Get Features (0Ah): Supported 00:27:14.629 Asynchronous Event Request (0Ch): Supported 00:27:14.629 Keep Alive (18h): Supported 00:27:14.629 I/O Commands 00:27:14.629 ------------ 00:27:14.629 Flush (00h): Supported 00:27:14.629 Write (01h): Supported LBA-Change 00:27:14.629 Read (02h): Supported 00:27:14.629 Write Zeroes (08h): Supported LBA-Change 00:27:14.629 Dataset Management (09h): Supported 00:27:14.629 00:27:14.629 Error Log 00:27:14.629 ========= 00:27:14.629 Entry: 0 00:27:14.629 Error Count: 0x3 00:27:14.629 Submission Queue Id: 0x0 00:27:14.629 Command Id: 0x5 00:27:14.629 Phase Bit: 0 00:27:14.629 Status Code: 0x2 00:27:14.629 Status Code Type: 0x0 00:27:14.629 Do Not Retry: 1 00:27:14.629 Error Location: 0x28 00:27:14.629 LBA: 0x0 00:27:14.629 Namespace: 0x0 00:27:14.629 Vendor Log Page: 0x0 00:27:14.629 ----------- 00:27:14.629 Entry: 1 00:27:14.629 Error Count: 0x2 00:27:14.629 Submission Queue Id: 0x0 00:27:14.629 Command Id: 0x5 00:27:14.629 Phase Bit: 0 00:27:14.629 Status Code: 0x2 00:27:14.629 Status Code Type: 0x0 00:27:14.629 Do Not Retry: 1 00:27:14.629 Error Location: 0x28 00:27:14.629 LBA: 0x0 00:27:14.629 Namespace: 0x0 00:27:14.629 Vendor Log Page: 0x0 00:27:14.629 ----------- 00:27:14.629 Entry: 2 00:27:14.629 Error Count: 0x1 00:27:14.629 Submission Queue Id: 0x0 00:27:14.629 Command Id: 0x4 00:27:14.629 Phase Bit: 0 00:27:14.629 Status Code: 0x2 00:27:14.629 Status Code Type: 0x0 00:27:14.629 Do Not Retry: 1 00:27:14.629 Error Location: 0x28 00:27:14.629 LBA: 0x0 00:27:14.629 Namespace: 0x0 00:27:14.629 Vendor Log Page: 0x0 00:27:14.629 00:27:14.629 Number of Queues 00:27:14.629 ================ 00:27:14.629 Number of I/O Submission Queues: 128 00:27:14.629 Number of I/O Completion Queues: 128 00:27:14.629 00:27:14.629 ZNS Specific Controller Data 00:27:14.629 
============================ 00:27:14.629 Zone Append Size Limit: 0 00:27:14.629 00:27:14.629 00:27:14.629 Active Namespaces 00:27:14.629 ================= 00:27:14.629 get_feature(0x05) failed 00:27:14.629 Namespace ID:1 00:27:14.629 Command Set Identifier: NVM (00h) 00:27:14.629 Deallocate: Supported 00:27:14.629 Deallocated/Unwritten Error: Not Supported 00:27:14.629 Deallocated Read Value: Unknown 00:27:14.629 Deallocate in Write Zeroes: Not Supported 00:27:14.629 Deallocated Guard Field: 0xFFFF 00:27:14.629 Flush: Supported 00:27:14.629 Reservation: Not Supported 00:27:14.629 Namespace Sharing Capabilities: Multiple Controllers 00:27:14.629 Size (in LBAs): 3750748848 (1788GiB) 00:27:14.629 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:14.629 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:14.629 UUID: 1112bcbd-7f0a-4e83-ae41-c19b989b10b7 00:27:14.630 Thin Provisioning: Not Supported 00:27:14.630 Per-NS Atomic Units: Yes 00:27:14.630 Atomic Write Unit (Normal): 8 00:27:14.630 Atomic Write Unit (PFail): 8 00:27:14.630 Preferred Write Granularity: 8 00:27:14.630 Atomic Compare & Write Unit: 8 00:27:14.630 Atomic Boundary Size (Normal): 0 00:27:14.630 Atomic Boundary Size (PFail): 0 00:27:14.630 Atomic Boundary Offset: 0 00:27:14.630 NGUID/EUI64 Never Reused: No 00:27:14.630 ANA group ID: 1 00:27:14.630 Namespace Write Protected: No 00:27:14.630 Number of LBA Formats: 1 00:27:14.630 Current LBA Format: LBA Format #00 00:27:14.630 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:14.630 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:14.630 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:14.630 rmmod nvme_tcp 00:27:14.630 rmmod nvme_fabrics 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.890 20:22:12 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:16.800 20:22:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:20.144 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:20.144 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:20.144 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:20.144 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:20.404 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:20.974 00:27:20.974 real 0m18.772s 00:27:20.974 user 0m5.092s 00:27:20.974 sys 0m10.672s 00:27:20.974 20:22:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.974 20:22:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.974 ************************************ 00:27:20.974 END TEST nvmf_identify_kernel_target 00:27:20.974 ************************************ 00:27:20.974 20:22:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:20.974 20:22:18 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:20.974 20:22:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:20.974 20:22:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.974 20:22:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.974 ************************************ 
00:27:20.974 START TEST nvmf_auth_host 00:27:20.974 ************************************ 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:20.974 * Looking for test storage... 00:27:20.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.974 20:22:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.564 
20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:27.564 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.564 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:27.565 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:27.565 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:27.565 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.565 20:22:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:27:27.826 00:27:27.826 --- 10.0.0.2 ping statistics --- 00:27:27.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.826 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:27:27.826 00:27:27.826 --- 10.0.0.1 ping statistics --- 00:27:27.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.826 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1142700 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1142700 00:27:27.826 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:27.827 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1142700 ']' 00:27:27.827 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.827 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.827 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
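[editor's note] The ping exchange above verifies the loopback wiring the harness just built: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A minimal sketch of re-running that connectivity check by hand, using only the interface, namespace and address names visible in this run:

# initiator side (root namespace) -> target interface inside the namespace
ping -c 1 10.0.0.2
# target side (inside the namespace) -> initiator interface in the root namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1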
00:27:27.827 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.827 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.770 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.770 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:28.770 20:22:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.770 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.770 20:22:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f08f7f1787136e9e45db7119d92ec0a 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hFg 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f08f7f1787136e9e45db7119d92ec0a 0 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f08f7f1787136e9e45db7119d92ec0a 0 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f08f7f1787136e9e45db7119d92ec0a 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hFg 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hFg 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.hFg 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:28.770 
20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ae47808461e6cd971a61bf81a467eb8c30a88ca6c1ca99242b9e90de7525b6b 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.geo 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ae47808461e6cd971a61bf81a467eb8c30a88ca6c1ca99242b9e90de7525b6b 3 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ae47808461e6cd971a61bf81a467eb8c30a88ca6c1ca99242b9e90de7525b6b 3 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ae47808461e6cd971a61bf81a467eb8c30a88ca6c1ca99242b9e90de7525b6b 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.geo 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.geo 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.geo 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2446480d9146b7c329faf9e63ae3f4b6b59dba019720e44e 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uIF 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2446480d9146b7c329faf9e63ae3f4b6b59dba019720e44e 0 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2446480d9146b7c329faf9e63ae3f4b6b59dba019720e44e 0 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2446480d9146b7c329faf9e63ae3f4b6b59dba019720e44e 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:28.770 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uIF 00:27:29.033 20:22:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uIF 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.uIF 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33ef122783fa1ecaca24e21b76b8c83e9893e809eb2092cf 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.W2Y 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33ef122783fa1ecaca24e21b76b8c83e9893e809eb2092cf 2 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33ef122783fa1ecaca24e21b76b8c83e9893e809eb2092cf 2 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33ef122783fa1ecaca24e21b76b8c83e9893e809eb2092cf 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.W2Y 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.W2Y 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.W2Y 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8dc59130e6335fac86c08a1598cab170 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1oT 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8dc59130e6335fac86c08a1598cab170 1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8dc59130e6335fac86c08a1598cab170 1 
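[editor's note] Each gen_dhchap_key call above draws random bytes with xxd and hands the resulting hex string to a small python step that wraps it in the DH-HMAC-CHAP ASCII secret form. A rough stand-alone sketch of that wrapping, inferred from the hex keys and the DHHC-1:00:... strings that appear later in this run; the appended CRC-32 and its little-endian packing are an assumption about what the helper does, not a verified copy of it:

# hypothetical rewrite of the key-formatting step (assumptions noted above)
hexkey=$(xxd -p -c0 -l 24 /dev/urandom)          # 48-char hex secret, as in "gen_dhchap_key null 48"
python3 - "$hexkey" <<'PY'
import sys, base64, zlib
secret = sys.argv[1].encode()                     # the ASCII hex string itself is the secret material
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumption: CRC-32 appended before base64 encoding
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")  # "00" = no hash transform
PY

The output has the same shape as the DHHC-1:00:...==: secrets passed to nvmet_auth_set_key further down.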
00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8dc59130e6335fac86c08a1598cab170 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1oT 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1oT 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1oT 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c111076c2fe7a343a48cd11e3613a4dd 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jVd 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c111076c2fe7a343a48cd11e3613a4dd 1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c111076c2fe7a343a48cd11e3613a4dd 1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c111076c2fe7a343a48cd11e3613a4dd 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jVd 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jVd 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.jVd 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=8d47f001ae5902530329ad35748e10987ed1cadaf6adfd7f 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fHN 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d47f001ae5902530329ad35748e10987ed1cadaf6adfd7f 2 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d47f001ae5902530329ad35748e10987ed1cadaf6adfd7f 2 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.033 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d47f001ae5902530329ad35748e10987ed1cadaf6adfd7f 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fHN 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fHN 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fHN 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:29.034 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e927000bebd98f80cdf0d09012da73e 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AUB 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e927000bebd98f80cdf0d09012da73e 0 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e927000bebd98f80cdf0d09012da73e 0 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e927000bebd98f80cdf0d09012da73e 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AUB 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AUB 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.AUB 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=613181496a333a1ac4c9200fe8a69e07a29501ff3560c93757208f4718fb9b4d 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6MK 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 613181496a333a1ac4c9200fe8a69e07a29501ff3560c93757208f4718fb9b4d 3 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 613181496a333a1ac4c9200fe8a69e07a29501ff3560c93757208f4718fb9b4d 3 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=613181496a333a1ac4c9200fe8a69e07a29501ff3560c93757208f4718fb9b4d 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6MK 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6MK 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6MK 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1142700 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1142700 ']' 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
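[editor's note] waitforlisten above blocks until the nvmf_tgt started earlier (pid 1142700) answers on its RPC socket. A rough hand-rolled equivalent, assuming the default /var/tmp/spdk.sock socket named in the message; this is a sketch, not the harness's actual implementation:

# poll the target's JSON-RPC socket until it responds, then proceed
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$RPC" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done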
00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.295 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hFg 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.geo ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.geo 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.uIF 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.W2Y ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W2Y 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1oT 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.jVd ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jVd 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fHN 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.AUB ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.AUB 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6MK 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
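[editor's note] The loop above registers every generated secret file with the running target through the keyring_file_add_key RPC. rpc_cmd ultimately forwards these calls to the target's JSON-RPC socket via scripts/rpc.py, so the equivalent direct invocations would look like the sketch below, using the key names and file paths from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.hFg
"$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.geo
"$RPC" keyring_file_add_key key1  /tmp/spdk.key-null.uIF
"$RPC" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W2Y
"$RPC" keyring_file_add_key key2  /tmp/spdk.key-sha256.1oT
"$RPC" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jVd
"$RPC" keyring_file_add_key key3  /tmp/spdk.key-sha384.fHN
"$RPC" keyring_file_add_key ckey3 /tmp/spdk.key-null.AUB
"$RPC" keyring_file_add_key key4  /tmp/spdk.key-sha512.6MK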
00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:29.556 20:22:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:32.856 Waiting for block devices as requested 00:27:32.856 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:32.856 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:32.856 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:32.856 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:33.116 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:33.116 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:33.116 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:33.398 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:33.398 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:33.659 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:33.659 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:33.659 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:33.919 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:33.919 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:33.919 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:33.919 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:34.179 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:35.122 No valid GPT data, bailing 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:35.122 00:27:35.122 Discovery Log Number of Records 2, Generation counter 2 00:27:35.122 =====Discovery Log Entry 0====== 00:27:35.122 trtype: tcp 00:27:35.122 adrfam: ipv4 00:27:35.122 subtype: current discovery subsystem 00:27:35.122 treq: not specified, sq flow control disable supported 00:27:35.122 portid: 1 00:27:35.122 trsvcid: 4420 00:27:35.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:35.122 traddr: 10.0.0.1 00:27:35.122 eflags: none 00:27:35.122 sectype: none 00:27:35.122 =====Discovery Log Entry 1====== 00:27:35.122 trtype: tcp 00:27:35.122 adrfam: ipv4 00:27:35.122 subtype: nvme subsystem 00:27:35.122 treq: not specified, sq flow control disable supported 00:27:35.122 portid: 1 00:27:35.122 trsvcid: 4420 00:27:35.122 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:35.122 traddr: 10.0.0.1 00:27:35.122 eflags: none 00:27:35.122 sectype: none 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 
]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.122 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 nvme0n1 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.383 20:22:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 
20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 nvme0n1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.644 20:22:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.644 20:22:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.906 nvme0n1 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
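The entries around this point trace one full connect_authenticate pass (host/auth.sh@104) for sha256/ffdhe2048 with key index 1: the initiator is restricted to the digests and DH groups under test, attaches using the keyring key (plus the controller key, when present, for bidirectional authentication), the resulting controller name is checked, and the controller is detached again. Reduced to the bare RPCs visible in the trace, one pass looks roughly like:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 on success
    scripts/rpc.py bdev_nvme_detach_controller nvme0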
00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.906 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.167 nvme0n1 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:36.167 20:22:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.167 nvme0n1 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.167 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.168 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.168 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.168 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 nvme0n1 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.689 20:22:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 nvme0n1 00:27:36.689 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.689 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.689 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.689 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.689 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.950 nvme0n1 00:27:36.950 
20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.950 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.212 nvme0n1 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.212 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
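Before each of those attaches, nvmet_auth_set_key (host/auth.sh@42-51, which the surrounding entries repeat for every digest/DH-group/key combination) programs the same credentials into the kernel target for the host entry created earlier, nqn.2024-02.io.spdk:host0: the digest as hmac(<digest>), the DH group, the host secret and, when a ckey exists, the controller secret. The redirection targets are not captured by the xtrace; assuming the usual nvmet per-host dhchap attributes, the writes would look like:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest, as echoed in the trace
    echo ffdhe3072      > "$host/dhchap_dhgroup"    # DH group for this pass
    echo "$key"         > "$host/dhchap_key"        # DHHC-1 host secret from the key file
    [[ -z "$ckey" ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional auth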
00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.473 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.734 nvme0n1 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.734 
20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.734 20:22:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.734 20:22:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.027 nvme0n1 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:38.027 20:22:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.027 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.301 nvme0n1 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.301 20:22:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.301 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.561 nvme0n1 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.561 20:22:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.561 20:22:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.820 nvme0n1 00:27:38.821 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.821 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.821 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.821 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.821 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.821 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
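For reference, one pass of the connect_authenticate loop being traced here boils down to the short shell sketch below. It reuses only the RPC calls already visible in this log (rpc_cmd is the test suite's thin wrapper around SPDK's rpc.py); the digest/dhgroup/key values shown match the iteration in progress at this point (sha256, ffdhe4096, keyid 3), and the key names key3/ckey3 are assumed to have been registered earlier in the run by the test's setup code.

    # Limit the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Connect to the target at 10.0.0.1:4420, authenticating with key3 and,
    # bidirectionally, verifying the controller against ckey3.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # A successful DH-HMAC-CHAP handshake leaves a controller named nvme0 behind.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # Tear the controller down before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0

The detach at the end is what lets the log immediately move on to the next keyid with the same four RPCs, which is why the same attach/get/detach pattern repeats for every digest, dhgroup, and key combination in this section.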
00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.081 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.341 nvme0n1 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.341 20:22:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.341 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.602 nvme0n1 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.602 20:22:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:39.602 20:22:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.602 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.862 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.863 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.863 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.863 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.863 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.123 nvme0n1 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.123 
20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.123 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.383 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.384 20:22:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.384 20:22:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.645 nvme0n1 00:27:40.645 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.645 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.645 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.645 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.645 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.645 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.906 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.478 nvme0n1 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.478 
20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.478 20:22:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.739 nvme0n1 00:27:41.739 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.739 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.739 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.739 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.739 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.739 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.000 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.573 nvme0n1 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.573 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.574 20:22:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.146 nvme0n1 00:27:43.146 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.146 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.146 20:22:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.146 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.146 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.146 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.406 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.407 20:22:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.979 nvme0n1 00:27:43.979 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.979 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.979 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.979 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.979 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.979 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.259 20:22:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.832 nvme0n1 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.832 
20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.832 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
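The passes traced above all follow the same host-side sequence for a given digest/dhgroup/keyid: restrict bdev_nvme to that one combination, attach with the key under test, confirm the controller appears, then detach before the next pass. A minimal sketch of one such pass, assuming the autotest harness that defines rpc_cmd is sourced and that the key names key0/ckey0, the 10.0.0.1:4420 listener and the host/subsystem NQNs were registered earlier in the run (none of that setup is part of this excerpt):

  # Allow only hmac(sha256) with ffdhe8192 on the host, as bdev_nvme_set_options does in the trace.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Attach with the host key and, for bidirectional authentication, the controller key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded if the controller shows up; tear it down for the next combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0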
00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.094 20:22:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.665 nvme0n1 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.665 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.926 
20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.926 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.497 nvme0n1 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.497 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.758 20:22:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.758 nvme0n1 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:46.758 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
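At this point the trace has finished the sha256/ffdhe8192 round and moved on to sha384 with ffdhe2048; the loop heads at host/auth.sh@100-102 show that the test is a matrix over digests, dhgroups and key indices, programming the target via nvmet_auth_set_key and then exercising the host via connect_authenticate. A sketch of that driver loop, reconstructed from the traced loop heads (the array contents list only values visible in this excerpt, and the helpers plus the keys/ckeys arrays are assumed to be defined earlier in host/auth.sh):

  digests=(sha256 sha384)                   # the full script may cover more digests
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)  # likewise for dhgroups
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do    # keys[0..4]/ckeys[0..4] registered during setup
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key/params
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host attach/verify/detach
          done
      done
  done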
00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.759 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.019 nvme0n1 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.019 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.280 nvme0n1 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.280 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.281 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 nvme0n1 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.542 20:22:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.803 nvme0n1 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.803 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
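Note that the keyid-4 passes above attach with --dhchap-key key4 and no --dhchap-ctrlr-key: ckeys[4] is empty, so the expansion at host/auth.sh@58 yields an empty array and the controller-key option is simply omitted, presumably to cover the host-only (unidirectional) authentication case. A self-contained illustration of that ${var:+word} idiom (the key string below is a placeholder, not a value from the test):

  #!/usr/bin/env bash
  # ${var:+word} expands to word only when var is set and non-empty, so a missing
  # controller key drops the whole option instead of passing an empty argument.
  ckeys=("DHHC-1:03:placeholder-ctrlr-key:" "")   # index 0 has a ctrlr key, index 1 does not
  for keyid in 0 1; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra argv: ${ckey[*]:-<none>}"
  done
  # prints:
  #   keyid=0 extra argv: --dhchap-ctrlr-key ckey0
  #   keyid=1 extra argv: <none>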
00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.804 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.065 nvme0n1 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
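get_main_ns_ip, traced repeatedly from nvmf/common.sh@741-755, only decides which address the host should dial for the transport under test: rdma maps to NVMF_FIRST_TARGET_IP and tcp maps to NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A rough re-creation from those trace lines (the transport variable name, the ${!ip} indirection and the error handling are assumptions; only the candidate table and the echoed 10.0.0.1 are visible in the log):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # values are variable *names*, resolved below
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # TEST_TRANSPORT is a stand-in for whatever variable the real helper consults (tcp here).
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                  # indirect expansion: NVMF_INITIATOR_IP=10.0.0.1
      echo "${!ip}"
  }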
00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.065 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.327 nvme0n1 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.327 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.588 nvme0n1 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.588 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.589 20:22:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.850 nvme0n1 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.850 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.111 nvme0n1 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.111 20:22:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.111 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.112 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.373 nvme0n1 00:27:49.373 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.373 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.373 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.373 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.373 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.373 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.634 20:22:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.895 nvme0n1 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.895 20:22:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:49.895 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.896 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.157 nvme0n1 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:50.157 20:22:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.157 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.728 nvme0n1 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.728 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:50.729 20:22:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.989 nvme0n1 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.989 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.990 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.562 nvme0n1 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.562 20:22:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.563 20:22:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.563 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.563 20:22:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.145 nvme0n1 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.145 20:22:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.145 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.715 nvme0n1 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.715 20:22:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.287 nvme0n1 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
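The trace above is one more pass of the same host-side pattern that repeats for every digest/dhgroup/keyid combination in this run. Stripped of the xtrace prefixes, each connect_authenticate iteration reduces to the sketch below; this is a readability aid only, assuming rpc_cmd is the test harness's RPC wrapper, with the digest, dhgroup and keyid values taken from the surrounding trace:

    # One connect_authenticate pass as traced above (values from the log).
    digest=sha384 dhgroup=ffdhe6144 keyid=4

    # Restrict the host to the digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the key under test. Keyids 0-3 in this log additionally pass
    # --dhchap-ctrlr-key "ckey${keyid}" so the controller authenticates back to the host;
    # keyid 4 has no controller key, so only --dhchap-key is given.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}"

    # Confirm the controller authenticated and came up, then detach it before
    # the next digest/dhgroup/keyid combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0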
00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.287 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.547 nvme0n1 00:27:53.547 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.547 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.547 20:22:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.547 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.547 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.547 20:22:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
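Before each of these host-side attempts, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) installs the matching key material on the kernel nvmet target. The xtrace only records the bare echo commands, not where their output is redirected; assuming the standard Linux nvmet configfs host attributes and taking the hostnqn from the attach calls above, the four echoes correspond roughly to:

    # Sketch of the target-side writes behind nvmet_auth_set_key; the configfs
    # paths are an assumption, since the redirect targets do not appear in the xtrace.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Values for the iteration traced above; the full DHHC-1 key strings are the
    # ones shown at auth.sh@45/@46 and are abbreviated here.
    digest=sha384 dhgroup=ffdhe8192
    key='DHHC-1:00:MWYwOGY3...'    # host key (abbreviated)
    ckey='DHHC-1:03:MGFlNDc4...'   # controller key (abbreviated)

    echo "hmac(${digest})" > "$host_cfg/dhchap_hash"      # auth.sh@48, e.g. 'hmac(sha384)'
    echo "$dhgroup"        > "$host_cfg/dhchap_dhgroup"   # auth.sh@49, e.g. ffdhe8192
    echo "$key"            > "$host_cfg/dhchap_key"       # auth.sh@50, the host's DHHC-1 key
    # auth.sh@51 skips this write for keyids with no controller key (keyid 4 above).
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"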
00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.806 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.807 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.378 nvme0n1 00:27:54.378 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.378 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.378 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.378 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.378 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.638 20:22:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.209 nvme0n1 00:27:55.209 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.209 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.209 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.209 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.209 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.471 20:22:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.042 nvme0n1 00:27:56.042 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.042 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.042 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.042 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.042 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.303 20:22:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.876 nvme0n1 00:27:56.876 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.876 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:56.876 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.876 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.876 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.137 20:22:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.137 20:22:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.708 nvme0n1 00:27:57.708 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.708 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.708 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.708 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.708 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.969 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.970 nvme0n1 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.970 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.230 20:22:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.230 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.230 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.230 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.230 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.230 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.231 nvme0n1 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.231 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.492 nvme0n1 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.492 20:22:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.492 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.753 20:22:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.753 20:22:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.753 nvme0n1 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.753 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.754 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.015 nvme0n1 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.015 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.276 nvme0n1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.276 
20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.276 20:22:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.276 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.537 nvme0n1 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
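From here to the end of the section the same set_key/connect pair simply repeats for sha512 with each remaining DH group and key ID; the driving loops show up in the trace as host/auth.sh@100-102. Their shape, reconstructed from those loop headers (the arrays below list only the values exercised in this excerpt, and keys[], nvmet_auth_set_key, and connect_authenticate are assumed to be defined by host/auth.sh):

  # Digest/DH-group/key sweep implied by the for-loops in the trace.
  digests=(sha384 sha512)                              # values seen in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)   # values seen in this excerpt
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                   # keys[0..4] hold the DHHC-1 secrets
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target (see sketch above)
        connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify nvme0, detach
      done
    done
  done

Each successful iteration prints the nvme0n1 namespace line seen throughout this log and then detaches the controller before the next combination.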
00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.537 20:22:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.798 nvme0n1 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.798 20:22:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.798 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.059 nvme0n1 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.059 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.060 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.060 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.060 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:00.060 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.060 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.321 
20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 nvme0n1 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.321 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.583 20:22:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.844 nvme0n1 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.844 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.845 20:22:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.845 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.107 nvme0n1 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
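[editor's sketch] The `for dhgroup in "${dhgroups[@]}"` and `for keyid in "${!keys[@]}"` markers (host/auth.sh@101-102) show that the test sweeps every configured FFDHE group against every key index with the sha512 digest. A simplified, self-contained sketch of that sweep is below; the secrets are hypothetical placeholders and the two helpers are stubs standing in for the script's own `nvmet_auth_set_key` and `connect_authenticate` functions.

```bash
#!/usr/bin/env bash
# Simplified sketch of the sweep visible in the trace: every DH group is
# exercised against every key index using the sha512 digest. The secrets
# below are hypothetical placeholders; the real test uses DHHC-1 keys of
# types 00-03 as seen in the log. The two helpers are stand-ins for the
# script's nvmet_auth_set_key/connect_authenticate functions.
set -euo pipefail

digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(
    'DHHC-1:00:placeholder-key-0:'
    'DHHC-1:00:placeholder-key-1:'
    'DHHC-1:01:placeholder-key-2:'
    'DHHC-1:02:placeholder-key-3:'
    'DHHC-1:03:placeholder-key-4:'
)

nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }  # stub
connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }  # stub

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program the target
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach
    done
done
```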
00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.107 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.368 nvme0n1 00:28:01.368 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.368 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:01.368 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.368 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.368 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.368 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.629 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.630 20:22:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.891 nvme0n1 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.891 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.153 nvme0n1 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.153 20:22:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.154 20:22:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.154 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.154 20:22:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.726 nvme0n1 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
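[editor's sketch] Each `connect_authenticate` cycle in the trace is the host-side counterpart: restrict the initiator to one digest/DH-group pair, attach with the matching key pair, confirm the controller appears as `nvme0`, and detach. The sketch below replays one such cycle with the same RPCs the trace shows, assuming a running SPDK target, `scripts/rpc.py` on the default RPC socket, and DHCHAP keys named `key1`/`ckey1` already registered (key registration is not part of this excerpt).

```bash
#!/usr/bin/env bash
# One host-side cycle as seen in the trace (connect_authenticate sha512 ffdhe6144 1).
# Assumptions: SPDK's scripts/rpc.py talks to the default RPC socket, and the
# keyring entries key1/ckey1 referenced by --dhchap-key/--dhchap-ctrlr-key
# were registered earlier in the test (not shown in this excerpt).
set -euo pipefail

rpc=./scripts/rpc.py   # assumed path to SPDK's RPC client

# Limit the initiator to the digest/DH group under test.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Attach to the target at 10.0.0.1:4420 using the key pair for this key index.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The authenticated connection succeeded if the controller shows up as nvme0.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Tear down before the next digest/DH-group/key combination.
$rpc bdev_nvme_detach_controller nvme0
```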
00:28:02.726 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.727 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.298 nvme0n1 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.298 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.299 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.299 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.299 20:23:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.299 20:23:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.299 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.299 20:23:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.871 nvme0n1 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.871 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.444 nvme0n1 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.444 20:23:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.015 nvme0n1 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.015 20:23:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWYwOGY3ZjE3ODcxMzZlOWU0NWRiNzExOWQ5MmVjMGGOS5Lq: 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: ]] 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGFlNDc4MDg0NjFlNmNkOTcxYTYxYmY4MWE0NjdlYjhjMzBhODhjYTZjMWNhOTkyNDJiOWU5MGRlNzUyNWI2YgzKvXg=: 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.015 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.016 20:23:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.957 nvme0n1 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.957 20:23:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.958 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.958 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.958 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.574 nvme0n1 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.574 20:23:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRjNTkxMzBlNjMzNWZhYzg2YzA4YTE1OThjYWIxNzBMGEoZ: 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: ]] 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExMTA3NmMyZmU3YTM0M2E0OGNkMTFlMzYxM2E0ZGTE8US0: 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.574 20:23:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.574 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.835 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.835 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.835 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.407 nvme0n1 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ0N2YwMDFhZTU5MDI1MzAzMjlhZDM1NzQ4ZTEwOTg3ZWQxY2FkYWY2YWRmZDdma2U5Ew==: 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5MjcwMDBiZWJkOThmODBjZGYwZDA5MDEyZGE3M2Vm//SF: 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:07.407 20:23:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.407 20:23:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 nvme0n1 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEzMTgxNDk2YTMzM2ExYWM0YzkyMDBmZThhNjllMDdhMjk1MDFmZjM1NjBjOTM3NTcyMDhmNDcxOGZiOWI0ZDvMkMc=: 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:08.351 20:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.294 nvme0n1 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQ0NjQ4MGQ5MTQ2YjdjMzI5ZmFmOWU2M2FlM2Y0YjZiNTlkYmEwMTk3MjBlNDRlKwkNyQ==: 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNlZjEyMjc4M2ZhMWVjYWNhMjRlMjFiNzZiOGM4M2U5ODkzZTgwOWViMjA5MmNmugqPnQ==: 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.294 
20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.294 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.294 request: 00:28:09.294 { 00:28:09.294 "name": "nvme0", 00:28:09.294 "trtype": "tcp", 00:28:09.294 "traddr": "10.0.0.1", 00:28:09.294 "adrfam": "ipv4", 00:28:09.294 "trsvcid": "4420", 00:28:09.294 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:09.294 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:09.294 "prchk_reftag": false, 00:28:09.294 "prchk_guard": false, 00:28:09.294 "hdgst": false, 00:28:09.294 "ddgst": false, 00:28:09.294 "method": "bdev_nvme_attach_controller", 00:28:09.294 "req_id": 1 00:28:09.294 } 00:28:09.294 Got JSON-RPC error response 00:28:09.294 response: 00:28:09.294 { 00:28:09.294 "code": -5, 00:28:09.295 "message": "Input/output error" 00:28:09.295 } 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.295 request: 00:28:09.295 { 00:28:09.295 "name": "nvme0", 00:28:09.295 "trtype": "tcp", 00:28:09.295 "traddr": "10.0.0.1", 00:28:09.295 "adrfam": "ipv4", 00:28:09.295 "trsvcid": "4420", 00:28:09.295 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:09.295 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:09.295 "prchk_reftag": false, 00:28:09.295 "prchk_guard": false, 00:28:09.295 "hdgst": false, 00:28:09.295 "ddgst": false, 00:28:09.295 "dhchap_key": "key2", 00:28:09.295 "method": "bdev_nvme_attach_controller", 00:28:09.295 "req_id": 1 00:28:09.295 } 00:28:09.295 Got JSON-RPC error response 00:28:09.295 response: 00:28:09.295 { 00:28:09.295 "code": -5, 00:28:09.295 "message": "Input/output error" 00:28:09.295 } 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:09.295 20:23:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.295 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.557 request: 00:28:09.557 { 00:28:09.557 "name": "nvme0", 00:28:09.557 "trtype": "tcp", 00:28:09.557 "traddr": "10.0.0.1", 00:28:09.557 "adrfam": "ipv4", 
00:28:09.557 "trsvcid": "4420", 00:28:09.557 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:09.557 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:09.557 "prchk_reftag": false, 00:28:09.557 "prchk_guard": false, 00:28:09.557 "hdgst": false, 00:28:09.557 "ddgst": false, 00:28:09.557 "dhchap_key": "key1", 00:28:09.557 "dhchap_ctrlr_key": "ckey2", 00:28:09.557 "method": "bdev_nvme_attach_controller", 00:28:09.557 "req_id": 1 00:28:09.557 } 00:28:09.557 Got JSON-RPC error response 00:28:09.557 response: 00:28:09.557 { 00:28:09.557 "code": -5, 00:28:09.557 "message": "Input/output error" 00:28:09.557 } 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.557 rmmod nvme_tcp 00:28:09.557 rmmod nvme_fabrics 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1142700 ']' 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1142700 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1142700 ']' 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1142700 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1142700 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1142700' 00:28:09.557 killing process with pid 1142700 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1142700 00:28:09.557 20:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1142700 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.818 20:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:11.733 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:11.994 20:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:15.298 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:15.298 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:15.868 20:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hFg /tmp/spdk.key-null.uIF /tmp/spdk.key-sha256.1oT /tmp/spdk.key-sha384.fHN /tmp/spdk.key-sha512.6MK 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:15.868 20:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:19.170 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:19.170 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:19.170 00:28:19.170 real 0m58.404s 00:28:19.170 user 0m52.397s 00:28:19.170 sys 0m14.858s 00:28:19.170 20:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.170 20:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.170 ************************************ 00:28:19.170 END TEST nvmf_auth_host 00:28:19.170 ************************************ 00:28:19.432 20:23:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:19.432 20:23:16 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:19.432 20:23:16 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:19.432 20:23:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:19.432 20:23:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.432 20:23:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.432 ************************************ 00:28:19.432 START TEST nvmf_digest 00:28:19.432 ************************************ 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:19.432 * Looking for test storage... 
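For reference, the per-key flow that the nvmf_auth_host trace above keeps repeating boils down to the short shell sketch below. The rpc_cmd calls are copied from the trace; the configfs attribute names under the nvmet host directory (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrlr_key) are assumed from the usual kernel nvmet-auth layout, since xtrace does not show redirection targets, and the keys/ckeys arrays plus the rpc_cmd helper are the ones defined earlier in auth.sh.

    # One pass of the key loop, with digest=sha512, dhgroup=ffdhe6144, keyid=3 (as in the trace).
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn

    # Program the key material into the kernel target (attribute names assumed, see above).
    echo 'hmac(sha512)' > "$hostdir/dhchap_hash"
    echo 'ffdhe6144' > "$hostdir/dhchap_dhgroup"
    echo "${keys[3]}" > "$hostdir/dhchap_key"
    [[ -n ${ckeys[3]} ]] && echo "${ckeys[3]}" > "$hostdir/dhchap_ctrlr_key"

    # Limit the SPDK initiator to the same digest/dhgroup, then connect with the matching key pair.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Confirm the controller came up under the expected name, then detach before the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0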
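get_main_ns_ip, which runs before every attach in the trace, is a transport-keyed lookup: for tcp it resolves NVMF_INITIATOR_IP (10.0.0.1 on this rig), while for rdma it would pick NVMF_FIRST_TARGET_IP. A condensed sketch of the logic traced at nvmf/common.sh@741-755 follows; the TEST_TRANSPORT variable name is an assumption, as the trace only shows its expanded value (tcp).

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        # Indirect expansion: for tcp this resolves NVMF_INITIATOR_IP, i.e. 10.0.0.1 here.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }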
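The failure-path checks later in the trace (the NOT rpc_cmd bdev_nvme_attach_controller calls that end in request/response dumps with code -5) all follow one pattern: the connect must be rejected when the host presents no key, only part of a key pair, or a mismatched controller key. A hedged sketch of that pattern, using a local expect_failure helper rather than SPDK's actual NOT/valid_exec_arg machinery:

    # Simplified stand-in for the NOT wrapper seen in the trace: run the command and
    # succeed only if it fails (any non-zero exit status counts here).
    expect_failure() {
        local es=0
        "$@" || es=$?
        ((es != 0))
    }

    # Without a key, or with a key/ctrlr-key combination the target does not accept,
    # the connect attempt must fail; the log shows it surfacing as JSON-RPC error -5,
    # "Input/output error".
    expect_failure rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    expect_failure rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

    # Either way, no controller may be left registered afterwards.
    (($(rpc_cmd bdev_nvme_get_controllers | jq length) == 0))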
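The cleanup tail of the auth test (host/auth.sh@24-28 plus clean_kernel_target in nvmf/common.sh) undoes the kernel target setup in reverse order. The commands below are lifted from the trace; the only guessed detail is the redirection target of the bare "echo 0", which xtrace hides and which is assumed here to be the namespace's enable attribute.

    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet

    # auth.sh@25-26: revoke the host's access to the subsystem, then drop the host definition.
    rm "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"
    rmdir "$cfs/hosts/$hostnqn"

    # clean_kernel_target: disable the namespace (redirection target assumed), unlink the
    # subsystem from the port, then remove namespace, port and subsystem directories.
    echo 0 > "$cfs/subsystems/$subnqn/namespaces/1/enable"
    rm -f "$cfs/ports/1/subsystems/$subnqn"
    rmdir "$cfs/subsystems/$subnqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$subnqn"

    # With nothing referencing them any more, the kernel target modules can be unloaded.
    modprobe -r nvmet_tcp nvmet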
00:28:19.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.432 20:23:16 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.432 20:23:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:27.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:27.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:27.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:27.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:27.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:28:27.572 00:28:27.572 --- 10.0.0.2 ping statistics --- 00:28:27.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.572 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:28:27.572 00:28:27.572 --- 10.0.0.1 ping statistics --- 00:28:27.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.572 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:27.572 ************************************ 00:28:27.572 START TEST nvmf_digest_clean 00:28:27.572 ************************************ 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1159329 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1159329 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1159329 ']' 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.572 
20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.572 20:23:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.572 [2024-07-15 20:23:23.922118] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:27.572 [2024-07-15 20:23:23.922179] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.572 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.572 [2024-07-15 20:23:23.990270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.572 [2024-07-15 20:23:24.053744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.572 [2024-07-15 20:23:24.053778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.572 [2024-07-15 20:23:24.053785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.572 [2024-07-15 20:23:24.053792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.572 [2024-07-15 20:23:24.053797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
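For reference, the network plumbing that nvmftestinit/nvmf_tcp_init traced a little earlier boils down to the sketch below. Interface names (cvl_0_0, cvl_0_1), addresses, port and the nvmf_tgt invocation are copied from the trace; the comments, the ordering shorthand and the $SPDK variable (standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout) are added for readability, so treat this as a condensed sketch rather than the exact script:

    ip netns add cvl_0_0_ns_spdk                                   # target NIC gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &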
00:28:27.572 [2024-07-15 20:23:24.053817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.572 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.572 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:27.572 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.572 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:27.572 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.572 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.573 null0 00:28:27.573 [2024-07-15 20:23:24.796443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.573 [2024-07-15 20:23:24.820610] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1159395 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1159395 /var/tmp/bperf.sock 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1159395 ']' 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:27.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.573 20:23:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.573 [2024-07-15 20:23:24.875105] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:27.573 [2024-07-15 20:23:24.875157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159395 ] 00:28:27.573 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.573 [2024-07-15 20:23:24.949623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.832 [2024-07-15 20:23:25.013825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.404 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.404 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:28.404 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:28.404 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:28.404 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.664 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.664 20:23:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.664 nvme0n1 00:28:28.664 20:23:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:28.664 20:23:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.924 Running I/O for 2 seconds... 
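Each run_bperf invocation in nvmf_digest_clean follows the same pattern that produced the 2-second run whose results appear next: bdevperf is started against /var/tmp/bperf.sock with --wait-for-rpc, the framework is released (scan_dsa is false here, so nothing is reconfigured first), a controller is attached with data digest enabled (--ddgst), the workload is driven through bdevperf.py, and the crc32c accel statistics are read back. A condensed sketch with the bperf_rpc/bperf_py helpers inlined and $SPDK again standing in for the repository path:

    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The test then only checks that the executed counter is greater than zero and that the reporting module matches the expected one (software here, since no DSA initiator was requested).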
00:28:30.899 00:28:30.899 Latency(us) 00:28:30.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.899 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:30.899 nvme0n1 : 2.05 14617.54 57.10 0.00 0.00 8572.22 3604.48 45219.84 00:28:30.899 =================================================================================================================== 00:28:30.899 Total : 14617.54 57.10 0.00 0.00 8572.22 3604.48 45219.84 00:28:30.899 0 00:28:30.899 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.899 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.899 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.899 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.899 | select(.opcode=="crc32c") 00:28:30.899 | "\(.module_name) \(.executed)"' 00:28:30.899 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1159395 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1159395 ']' 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1159395 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1159395 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1159395' 00:28:31.159 killing process with pid 1159395 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1159395 00:28:31.159 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.159 00:28:31.159 Latency(us) 00:28:31.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.159 =================================================================================================================== 00:28:31.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1159395 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:31.159 20:23:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1160173 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1160173 /var/tmp/bperf.sock 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1160173 ']' 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:31.159 20:23:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.420 [2024-07-15 20:23:28.619699] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:31.420 [2024-07-15 20:23:28.619754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160173 ] 00:28:31.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:31.420 Zero copy mechanism will not be used. 
00:28:31.420 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.420 [2024-07-15 20:23:28.694778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.420 [2024-07-15 20:23:28.759170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.990 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:31.990 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:31.990 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.990 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.990 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.251 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.251 20:23:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.822 nvme0n1 00:28:32.822 20:23:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.822 20:23:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.822 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.822 Zero copy mechanism will not be used. 00:28:32.822 Running I/O for 2 seconds... 
00:28:34.734 00:28:34.734 Latency(us) 00:28:34.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.734 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:34.734 nvme0n1 : 2.00 2108.52 263.56 0.00 0.00 7585.51 4396.37 16820.91 00:28:34.734 =================================================================================================================== 00:28:34.734 Total : 2108.52 263.56 0.00 0.00 7585.51 4396.37 16820.91 00:28:34.734 0 00:28:34.734 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:34.734 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:34.734 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:34.734 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.734 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:34.734 | select(.opcode=="crc32c") 00:28:34.734 | "\(.module_name) \(.executed)"' 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1160173 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1160173 ']' 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1160173 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1160173 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1160173' 00:28:34.996 killing process with pid 1160173 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1160173 00:28:34.996 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.996 00:28:34.996 Latency(us) 00:28:34.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.996 =================================================================================================================== 00:28:34.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.996 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1160173 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:35.257 20:23:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1161017 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1161017 /var/tmp/bperf.sock 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1161017 ']' 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:35.257 20:23:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.257 [2024-07-15 20:23:32.482274] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
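The teardown between runs (pids 1159395 and 1160173 above) goes through the killprocess helper from autotest_common.sh. Reconstructed from its xtrace it behaves roughly as below; the sudo branch is never exercised in this log, so its handling here is a guess, and the real helper's argument and error handling may differ:

    killprocess() {                                        # rough sketch, reconstructed from the trace
        local pid=$1
        [ -n "$pid" ] || return 1                          # nothing to do without a pid
        kill -0 "$pid" || return 1                         # process already gone
        if [ "$(uname)" = Linux ] && \
           [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            return 1                                       # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                    # reap it; bdevperf prints its shutdown stats here
    }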
00:28:35.257 [2024-07-15 20:23:32.482329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161017 ] 00:28:35.257 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.257 [2024-07-15 20:23:32.556558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.257 [2024-07-15 20:23:32.610058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.828 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.828 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:35.828 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:35.828 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:35.828 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.087 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.087 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.659 nvme0n1 00:28:36.659 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:36.659 20:23:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:36.659 Running I/O for 2 seconds... 
00:28:38.575 00:28:38.575 Latency(us) 00:28:38.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.575 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.575 nvme0n1 : 2.01 21379.62 83.51 0.00 0.00 5979.44 4560.21 14636.37 00:28:38.575 =================================================================================================================== 00:28:38.575 Total : 21379.62 83.51 0.00 0.00 5979.44 4560.21 14636.37 00:28:38.575 0 00:28:38.575 20:23:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:38.575 20:23:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:38.575 20:23:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:38.575 20:23:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:38.575 | select(.opcode=="crc32c") 00:28:38.575 | "\(.module_name) \(.executed)"' 00:28:38.575 20:23:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1161017 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1161017 ']' 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1161017 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1161017 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1161017' 00:28:38.836 killing process with pid 1161017 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1161017 00:28:38.836 Received shutdown signal, test time was about 2.000000 seconds 00:28:38.836 00:28:38.836 Latency(us) 00:28:38.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.836 =================================================================================================================== 00:28:38.836 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.836 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1161017 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:39.097 20:23:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1161729 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1161729 /var/tmp/bperf.sock 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1161729 ']' 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.097 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.098 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.098 20:23:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.098 [2024-07-15 20:23:36.335777] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:39.098 [2024-07-15 20:23:36.335834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161729 ] 00:28:39.098 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:39.098 Zero copy mechanism will not be used. 
00:28:39.098 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.098 [2024-07-15 20:23:36.411738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.098 [2024-07-15 20:23:36.465433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.040 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.040 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:40.040 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:40.041 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:40.041 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.041 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.041 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.301 nvme0n1 00:28:40.302 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:40.302 20:23:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.302 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:40.302 Zero copy mechanism will not be used. 00:28:40.302 Running I/O for 2 seconds... 
00:28:42.849 00:28:42.849 Latency(us) 00:28:42.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.849 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:42.849 nvme0n1 : 2.00 2895.36 361.92 0.00 0.00 5517.49 3822.93 20753.07 00:28:42.849 =================================================================================================================== 00:28:42.849 Total : 2895.36 361.92 0.00 0.00 5517.49 3822.93 20753.07 00:28:42.849 0 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:42.849 | select(.opcode=="crc32c") 00:28:42.849 | "\(.module_name) \(.executed)"' 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1161729 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1161729 ']' 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1161729 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1161729 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1161729' 00:28:42.849 killing process with pid 1161729 00:28:42.849 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1161729 00:28:42.849 Received shutdown signal, test time was about 2.000000 seconds 00:28:42.849 00:28:42.849 Latency(us) 00:28:42.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.849 =================================================================================================================== 00:28:42.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.850 20:23:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1161729 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1159329 00:28:42.850 20:23:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1159329 ']' 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1159329 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1159329 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1159329' 00:28:42.850 killing process with pid 1159329 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1159329 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1159329 00:28:42.850 00:28:42.850 real 0m16.371s 00:28:42.850 user 0m31.761s 00:28:42.850 sys 0m3.252s 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.850 ************************************ 00:28:42.850 END TEST nvmf_digest_clean 00:28:42.850 ************************************ 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.850 20:23:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:43.111 ************************************ 00:28:43.111 START TEST nvmf_digest_error 00:28:43.111 ************************************ 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1162440 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1162440 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1162440 ']' 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.111 20:23:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.111 [2024-07-15 20:23:40.358744] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:43.111 [2024-07-15 20:23:40.358793] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.111 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.111 [2024-07-15 20:23:40.422090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.111 [2024-07-15 20:23:40.485844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.111 [2024-07-15 20:23:40.485878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.111 [2024-07-15 20:23:40.485886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.111 [2024-07-15 20:23:40.485892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.111 [2024-07-15 20:23:40.485897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
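The nvmf_digest_error test that starts here differs from the clean variant in one respect: the target's crc32c operation is routed through the error accel module so that corruption can be injected on demand, which is what produces the data digest errors visible further down. The RPC sequence, copied from the trace below with the helper names kept (rpc_cmd drives the nvmf target app, bperf_rpc and bperf_py the bdevperf instance on /var/tmp/bperf.sock, as in the clean test):

    rpc_cmd accel_assign_opc -o crc32c -m error                    # target: crc32c now served by the error module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # baseline: injection off
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # flags copied verbatim from the trace
    bperf_py perform_tests                                         # reads now complete with data digest errors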
00:28:43.111 [2024-07-15 20:23:40.485916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.055 [2024-07-15 20:23:41.163863] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.055 null0 00:28:44.055 [2024-07-15 20:23:41.244643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.055 [2024-07-15 20:23:41.268813] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1162782 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1162782 /var/tmp/bperf.sock 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1162782 ']' 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:44.055 20:23:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.055 [2024-07-15 20:23:41.334612] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:44.055 [2024-07-15 20:23:41.334674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162782 ] 00:28:44.055 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.055 [2024-07-15 20:23:41.409682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.055 [2024-07-15 20:23:41.463816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.997 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.258 nvme0n1 00:28:45.258 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:45.258 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.258 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.258 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.258 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:45.258 20:23:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:45.258 Running I/O for 2 seconds... 00:28:45.258 [2024-07-15 20:23:42.617281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.258 [2024-07-15 20:23:42.617311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.258 [2024-07-15 20:23:42.617319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.258 [2024-07-15 20:23:42.630577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.258 [2024-07-15 20:23:42.630597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.258 [2024-07-15 20:23:42.630609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.258 [2024-07-15 20:23:42.643339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.258 [2024-07-15 20:23:42.643358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.258 [2024-07-15 20:23:42.643365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.258 [2024-07-15 20:23:42.655326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.258 [2024-07-15 20:23:42.655345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.258 [2024-07-15 20:23:42.655351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.258 [2024-07-15 20:23:42.670020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.258 [2024-07-15 20:23:42.670039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.258 [2024-07-15 20:23:42.670045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.258 [2024-07-15 20:23:42.681254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.258 [2024-07-15 20:23:42.681272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.258 [2024-07-15 20:23:42.681278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.694571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.694589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.694595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.706877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.706894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.706901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.720371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.720389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.720395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.732485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.732502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.732509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.744948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.744970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.744976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.757509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.757526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.757532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.770175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.770193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.770199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.782750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.782767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:19585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.782773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.795945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.795963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.795969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.807552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.807569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.807575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.821944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.821962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.821968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.834361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.834379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.834385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.846791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.846809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.846816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.858259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.858276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.858283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.871332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.871349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.871356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.884053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.884071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.884077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.896605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.896622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.896628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.910616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.910634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.910640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.520 [2024-07-15 20:23:42.921687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.520 [2024-07-15 20:23:42.921704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.520 [2024-07-15 20:23:42.921710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.521 [2024-07-15 20:23:42.935323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.521 [2024-07-15 20:23:42.935341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.521 [2024-07-15 20:23:42.935347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.521 [2024-07-15 20:23:42.947967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.521 [2024-07-15 20:23:42.947985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.521 [2024-07-15 20:23:42.947991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:42.959793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 
00:28:45.782 [2024-07-15 20:23:42.959811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:42.959820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:42.972904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:42.972921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:42.972927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:42.985852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:42.985869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:42.985876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:42.999000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:42.999017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:42.999023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.011608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.011626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.011632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.024342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.024360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.024366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.036222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.036239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.036245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.049897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.049914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.049920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.060810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.060827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.060833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.073605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.073622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.073628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.086999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.087016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.087022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.099570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.099587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.099592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.112220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.112237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.112243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.124597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.124614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.124620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.137071] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.137087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.137093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.782 [2024-07-15 20:23:43.149743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.782 [2024-07-15 20:23:43.149759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.782 [2024-07-15 20:23:43.149765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.783 [2024-07-15 20:23:43.162839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.783 [2024-07-15 20:23:43.162856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.783 [2024-07-15 20:23:43.162862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.783 [2024-07-15 20:23:43.175273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.783 [2024-07-15 20:23:43.175289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.783 [2024-07-15 20:23:43.175299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.783 [2024-07-15 20:23:43.187498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.783 [2024-07-15 20:23:43.187515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.783 [2024-07-15 20:23:43.187521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.783 [2024-07-15 20:23:43.199150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.783 [2024-07-15 20:23:43.199166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.783 [2024-07-15 20:23:43.199172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.783 [2024-07-15 20:23:43.213004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:45.783 [2024-07-15 20:23:43.213021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.783 [2024-07-15 20:23:43.213027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:46.047 [2024-07-15 20:23:43.224836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.224854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.224860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.237284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.237302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.237308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.250565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.250582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.250588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.263450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.263465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.263472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.276065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.276081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.276088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.288450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.288471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.288477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.300252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.300269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.300275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.313746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.313763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.313769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.326526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.326544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.326550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.338748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.338765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.338771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.350910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.350926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.350932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.364202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.364219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.364225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.377228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.377245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.377251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.389649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.389665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.389671] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.402788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.402805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.402811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.413070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.413087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.413093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.427165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.427182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.427188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.440085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.440102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.440108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.452046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.452063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.452070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.047 [2024-07-15 20:23:43.465573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.047 [2024-07-15 20:23:43.465589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.047 [2024-07-15 20:23:43.465595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.477654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.477671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.477677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.490225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.490242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.490248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.502675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.502691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.502701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.515069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.515088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.515095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.528024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.528041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.528047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.541303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.541320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.541326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.552600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.552617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.312 [2024-07-15 20:23:43.552623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.566000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.566016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:46.312 [2024-07-15 20:23:43.566022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.312 [2024-07-15 20:23:43.578483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.312 [2024-07-15 20:23:43.578500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.578506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.590904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.590921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.590927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.603608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.603625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.603631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.616179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.616200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.616206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.628539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.628556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.628563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.641065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.641081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.641087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.653749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.653765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:13419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.666227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.666244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.666250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.679429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.679446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.679452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.692460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.692476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.692482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.704328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.704345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.704351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.716865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.716882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.716889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.729319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.729336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.729342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.313 [2024-07-15 20:23:43.741100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.313 [2024-07-15 20:23:43.741117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.313 [2024-07-15 20:23:43.741127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.754503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.754520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.754526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.766917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.766934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.766940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.779475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.779492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.779498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.791675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.791692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.791698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.804558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.804575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.804581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.816958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.816975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.816981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.830089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 
[2024-07-15 20:23:43.830106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.830119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.843086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.843103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.843109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.855678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.855695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.855701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.867317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.867334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.867340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.879841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.879858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.879864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.892119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.892139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.892145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.903836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.903853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.903859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.917370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.917387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.917393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.929957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.929974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.929980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.942402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.942419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.942425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.955560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.955577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.955583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.967869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.967885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.967891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.980469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.980486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.980492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:43.993110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:43.993130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:43.993136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.575 [2024-07-15 20:23:44.004770] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.575 [2024-07-15 20:23:44.004787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.575 [2024-07-15 20:23:44.004794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.837 [2024-07-15 20:23:44.017384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.837 [2024-07-15 20:23:44.017401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.837 [2024-07-15 20:23:44.017408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.837 [2024-07-15 20:23:44.030127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.837 [2024-07-15 20:23:44.030143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.837 [2024-07-15 20:23:44.030150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.042710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.042727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.042737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.055206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.055223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.055229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.068319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.068336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.068342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.080659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.080676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.080682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.093873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.093890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.093896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.106446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.106463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.106469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.118235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.118252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.118258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.132157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.132174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.132180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.144383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.144401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.144407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.156754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.156774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.156780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.169175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.169191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.169197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.181638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.181656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.181662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.194983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.195000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.195006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.206345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.206362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.206368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.218814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.218831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.218837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.231652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.231670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.231676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.243884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.243901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.243908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.838 [2024-07-15 20:23:44.256129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:46.838 [2024-07-15 20:23:44.256146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.838 [2024-07-15 20:23:44.256153] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.268953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.268970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.268976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.282056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.282074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.282080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.294381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.294398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.294404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.306670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.306687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.306693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.319736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.319753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.319759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.332659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.332675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.332681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.345231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.345248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.345254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.358175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.358192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.358199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.369479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.369496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.369506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.382546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.131 [2024-07-15 20:23:44.382563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.131 [2024-07-15 20:23:44.382569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.131 [2024-07-15 20:23:44.395568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.395585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.395592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.407544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.407562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.407569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.419572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.419589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.419595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.432957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.432974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.132 [2024-07-15 20:23:44.432980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.445370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.445387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.445393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.457630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.457648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.457654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.471268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.471285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.471292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.483680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.483699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.483706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.496488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.496505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.496512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.509478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.509495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.509501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.521401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.521418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1238 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.521424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.132 [2024-07-15 20:23:44.534361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.132 [2024-07-15 20:23:44.534378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.132 [2024-07-15 20:23:44.534384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.393 [2024-07-15 20:23:44.546530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.393 [2024-07-15 20:23:44.546547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.393 [2024-07-15 20:23:44.546553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.393 [2024-07-15 20:23:44.559959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.393 [2024-07-15 20:23:44.559976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.393 [2024-07-15 20:23:44.559982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.393 [2024-07-15 20:23:44.571402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.393 [2024-07-15 20:23:44.571419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.393 [2024-07-15 20:23:44.571426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.393 [2024-07-15 20:23:44.585391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.393 [2024-07-15 20:23:44.585408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.393 [2024-07-15 20:23:44.585415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.393 [2024-07-15 20:23:44.597372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe958e0) 00:28:47.394 [2024-07-15 20:23:44.597390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.394 [2024-07-15 20:23:44.597396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.394 00:28:47.394 Latency(us) 00:28:47.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.394 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:47.394 nvme0n1 : 2.00 
20229.08 79.02 0.00 0.00 6320.21 3932.16 14417.92
00:28:47.394 ===================================================================================================================
00:28:47.394 Total : 20229.08 79.02 0.00 0.00 6320.21 3932.16 14417.92
00:28:47.394 0
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:47.394 | .driver_specific
00:28:47.394 | .nvme_error
00:28:47.394 | .status_code
00:28:47.394 | .command_transient_transport_error'
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1162782
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1162782 ']'
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1162782
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:47.394 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1162782
00:28:47.654 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:47.654 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:47.654 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1162782'
00:28:47.654 killing process with pid 1162782
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1162782
00:28:47.655 Received shutdown signal, test time was about 2.000000 seconds
00:28:47.655
00:28:47.655 Latency(us)
00:28:47.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.655 ===================================================================================================================
00:28:47.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1162782
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1163477
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1163477 /var/tmp/bperf.sock 00:28:47.655 20:23:44
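The (( 158 > 0 )) check above is the pass criterion for the case that just finished: bdevperf was started with --nvme-error-stat, so every injected digest failure is counted per NVMe status code in the bdev's iostat, and the test only requires that count to be non-zero. A minimal sketch of the same query, assuming an SPDK checkout as the working directory and a bdevperf instance listening on /var/tmp/bperf.sock as in this trace (the errcount variable name is illustrative):

    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # 158 transient transport errors were recorded in this run; any value > 0 passes
    (( errcount > 0 ))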
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1163477 ']'
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:47.655 20:23:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.655 [2024-07-15 20:23:45.015925] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization...
00:28:47.655 [2024-07-15 20:23:45.015981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163477 ]
00:28:47.655 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:47.655 Zero copy mechanism will not be used.
00:28:47.655 EAL: No free 2048 kB hugepages reported on node 1
00:28:47.916 [2024-07-15 20:23:45.089411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.916 [2024-07-15 20:23:45.142448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:48.487 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:48.487 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:48.487 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:48.487 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:48.748 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:48.749 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:48.749 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.749 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:48.749 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:48.749 20:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:48.749 nvme0n1 00:28:49.010 20:23:46
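Here the next error case (randread, 128 KiB I/O, queue depth 16) is being prepared: a fresh bdevperf is launched on its own RPC socket, NVMe error counters are enabled, crc32c error injection is cleared, and the controller is attached with TCP data digest (--ddgst) enabled; the trace that follows re-arms crc32c corruption and runs the 2-second job. A condensed sketch of that sequence, assuming the SPDK tree layout used by the paths in this log; rpc_cmd stands for the suite's RPC helper (its target socket is not expanded in this trace), the backgrounding '&' is implied rather than shown, and the -i 32 argument is taken verbatim from the trace below:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable    # start with injection off
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32    # re-arm crc32c corruption so READs hit data digest errors
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests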
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:49.010 20:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.010 20:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:49.010 20:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.010 20:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:49.010 20:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:49.010 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:49.010 Zero copy mechanism will not be used. 00:28:49.010 Running I/O for 2 seconds... 00:28:49.010 [2024-07-15 20:23:46.310264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.310295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.310303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.325787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.325807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.325815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.339622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.339642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.339649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.353409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.353428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.353434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.369779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.369797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.369804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 
20:23:46.385441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.385459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.385465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.400031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.400049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.400055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.416015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.416033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.416040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.010 [2024-07-15 20:23:46.431270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.010 [2024-07-15 20:23:46.431293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.010 [2024-07-15 20:23:46.431299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.445792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.445810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.445817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.463549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.463567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.463574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.478648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.478665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.478672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.495727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.495744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.495750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.511025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.511042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.511049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.528035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.528052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.528058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.272 [2024-07-15 20:23:46.544061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.272 [2024-07-15 20:23:46.544079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.272 [2024-07-15 20:23:46.544085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.560852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.560870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.560876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.575481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.575499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.575506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.587966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.587983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.587989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.596488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.596506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.596512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.606424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.606441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.606447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.621564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.621582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.621588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.633325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.633343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.633349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.651440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.651458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.651464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.666557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.666575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.666581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.681897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.681914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.681924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.273 [2024-07-15 20:23:46.697030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.273 [2024-07-15 20:23:46.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.273 [2024-07-15 20:23:46.697053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.713052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.713070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.713076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.727543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.727561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.727567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.741456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.741474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.741480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.755553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.755570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.755576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.771434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.771451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.771458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.787813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.787830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.534 [2024-07-15 20:23:46.787836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.804669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.804686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.804692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.820337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.820355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.820361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.832927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.832944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.832951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.848453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.848471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.848477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.864158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.864175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.864182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.879739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.879757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.879763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.534 [2024-07-15 20:23:46.894086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.534 [2024-07-15 20:23:46.894103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.534 [2024-07-15 20:23:46.894109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.535 [2024-07-15 20:23:46.909824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.535 [2024-07-15 20:23:46.909841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.535 [2024-07-15 20:23:46.909847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.535 [2024-07-15 20:23:46.925721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.535 [2024-07-15 20:23:46.925739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.535 [2024-07-15 20:23:46.925745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.535 [2024-07-15 20:23:46.941160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.535 [2024-07-15 20:23:46.941177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.535 [2024-07-15 20:23:46.941186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.535 [2024-07-15 20:23:46.957166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.535 [2024-07-15 20:23:46.957184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.535 [2024-07-15 20:23:46.957190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:46.970832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:46.970851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:46.970857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:46.984856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:46.984874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:46.984880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.000825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.000844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.000850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.014702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.014720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.014726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.026251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.026270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.026276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.040829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.040848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.040855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.056316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.056335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.056341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.072314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.072339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.072345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.087028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.087047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.087053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.100944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 
[2024-07-15 20:23:47.100962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.100968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.115091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.115109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.115115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.132136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.132154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.132160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.147855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.147872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.147878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.163969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.163987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.163993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.178358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.178377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.178383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.192846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.192865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.192871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.206806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.206825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.206831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.796 [2024-07-15 20:23:47.220440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:49.796 [2024-07-15 20:23:47.220459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-07-15 20:23:47.220465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.234305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.234331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.247624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.247642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.247648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.260489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.260508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.260513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.275718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.275737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.275742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.291230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.291248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.307315] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.307333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.307339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.323120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.323151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.323160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.338672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.338692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.338698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.353077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.353095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.353101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.368155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.368174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.368179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.384197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.384215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.384221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.400071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.400090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.400095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
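Each injected failure in the stream above produces the same three-message group: nvme_tcp.c reports the data digest error on the qpair, nvme_qpair.c prints the affected READ command, and the matching completion is logged as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable status. A minimal sketch for tallying these groups offline, assuming the console output has been captured to a hypothetical file named digest_test.log:

  # Count the injected digest errors and the transient transport completions they produced.
  grep -o 'data digest error on tqpair' digest_test.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' digest_test.log | wc -l
  # Per-LBA breakdown of the affected commands.
  grep -o 'lba:[0-9]*' digest_test.log | sort | uniq -c | sort -rn | head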
00:28:50.058 [2024-07-15 20:23:47.415324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.415343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.415348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.427974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.427993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.427998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.442771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.442789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.442795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.456865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.456887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.456892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.471743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.471762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.471768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.058 [2024-07-15 20:23:47.486041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.058 [2024-07-15 20:23:47.486059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.058 [2024-07-15 20:23:47.486065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.501523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.501542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.501548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.516832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.516852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.516858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.530816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.530836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.530843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.545822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.545841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.545847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.561085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.561103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.561109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.576829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.576848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.576854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.320 [2024-07-15 20:23:47.592687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.320 [2024-07-15 20:23:47.592705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.320 [2024-07-15 20:23:47.592711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.606847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.606866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.606872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.620956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.620975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.620981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.636944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.636963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.636969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.651919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.651938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.651944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.667107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.667133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.667140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.682259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.682278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.682284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.698719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.698737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.698743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.715014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.715033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.715042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.728954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.728974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.728980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.321 [2024-07-15 20:23:47.745380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.321 [2024-07-15 20:23:47.745399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.321 [2024-07-15 20:23:47.745405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.582 [2024-07-15 20:23:47.759857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.582 [2024-07-15 20:23:47.759876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-15 20:23:47.759882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.582 [2024-07-15 20:23:47.773158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.582 [2024-07-15 20:23:47.773176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-15 20:23:47.773182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.582 [2024-07-15 20:23:47.788766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.582 [2024-07-15 20:23:47.788785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-15 20:23:47.788791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 [2024-07-15 20:23:47.803912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.582 [2024-07-15 20:23:47.803931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-15 20:23:47.803937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.582 [2024-07-15 20:23:47.817587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.582 [2024-07-15 20:23:47.817605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 
[2024-07-15 20:23:47.817612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.582 [2024-07-15 20:23:47.831645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.831663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.831669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.846813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.846835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.846841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.862555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.862573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.862579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.877047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.877066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.877072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.893084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.893102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.893108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.907264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.907283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.907289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.921489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.921507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.921513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.934552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.934571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.934577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.948398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.948416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.948422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.963350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.963369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.963375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.976665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.976684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.976690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:47.993152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:47.993170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:47.993177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.583 [2024-07-15 20:23:48.007773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.583 [2024-07-15 20:23:48.007792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.583 [2024-07-15 20:23:48.007798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.022135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.022154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.022160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.034836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.034854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.034860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.046529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.046546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.046552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.059363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.059381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.059387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.074000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.074018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.074024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.090197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.090218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.090224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.104206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.104224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.104230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.119963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.119981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.119987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.134515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.134534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.134540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.146929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.146947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.146953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.161113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.161137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.161144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.177152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.177171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.177177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.192581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.192601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.192606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.205641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.205659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.205665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.219885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 
[2024-07-15 20:23:48.219905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.219912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.234286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.234306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.234313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.245695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.245714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.845 [2024-07-15 20:23:48.245720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.845 [2024-07-15 20:23:48.260806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.845 [2024-07-15 20:23:48.260825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.846 [2024-07-15 20:23:48.260831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.846 [2024-07-15 20:23:48.274853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:50.846 [2024-07-15 20:23:48.274872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.846 [2024-07-15 20:23:48.274878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.106 [2024-07-15 20:23:48.289003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x753b80) 00:28:51.107 [2024-07-15 20:23:48.289022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.107 [2024-07-15 20:23:48.289028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.107 00:28:51.107 Latency(us) 00:28:51.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:51.107 nvme0n1 : 2.00 2098.10 262.26 0.00 0.00 7621.81 1979.73 17476.27 00:28:51.107 =================================================================================================================== 00:28:51.107 Total : 2098.10 262.26 0.00 0.00 7621.81 1979.73 17476.27 00:28:51.107 0 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:51.107 
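With the two-second randread pass complete (2.00 s runtime, 2098.10 IOPS, Fail/s 0.00 despite the injected digest errors), the script checks the error accounting through the controller statistics rather than by parsing the console output: get_transient_errcount, traced next, runs bdev_get_iostat against the bdevperf RPC socket, extracts the command_transient_transport_error counter with jq, and asserts that it is non-zero (135 in this run). A standalone sketch of the same query, assuming it is issued from the SPDK source tree while bdevperf is still listening on /var/tmp/bperf.sock:

  # Pull per-bdev NVMe error statistics from the running bdevperf instance and
  # extract the transient transport error count recorded for nvme0n1.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'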
20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:51.107 | .driver_specific 00:28:51.107 | .nvme_error 00:28:51.107 | .status_code 00:28:51.107 | .command_transient_transport_error' 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1163477 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1163477 ']' 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1163477 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163477 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163477' 00:28:51.107 killing process with pid 1163477 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1163477 00:28:51.107 Received shutdown signal, test time was about 2.000000 seconds 00:28:51.107 00:28:51.107 Latency(us) 00:28:51.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.107 =================================================================================================================== 00:28:51.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.107 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1163477 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1164165 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1164165 /var/tmp/bperf.sock 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1164165 ']' 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.367 20:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.367 [2024-07-15 20:23:48.695851] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:51.367 [2024-07-15 20:23:48.695914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164165 ] 00:28:51.367 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.367 [2024-07-15 20:23:48.772872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.627 [2024-07-15 20:23:48.826337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.199 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.460 nvme0n1 00:28:52.460 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:52.460 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.460 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:52.460 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.460 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:28:52.460 20:23:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.722 Running I/O for 2 seconds... 00:28:52.722 [2024-07-15 20:23:49.970865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.722 [2024-07-15 20:23:49.971757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:49.971785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:49.982794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.722 [2024-07-15 20:23:49.983677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:49.983694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:49.994692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.722 [2024-07-15 20:23:49.995607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:49.995625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.007090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.722 [2024-07-15 20:23:50.007994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.008011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.018952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.722 [2024-07-15 20:23:50.019872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.019889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.030900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.722 [2024-07-15 20:23:50.031814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.031831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.043177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.722 [2024-07-15 20:23:50.044103] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.044120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.055038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.722 [2024-07-15 20:23:50.055948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.055963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.066919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.722 [2024-07-15 20:23:50.067844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.067859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.078769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.722 [2024-07-15 20:23:50.079640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.079655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.090620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.722 [2024-07-15 20:23:50.091504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.091520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.102490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.722 [2024-07-15 20:23:50.103391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.103405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.115056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.722 [2024-07-15 20:23:50.115962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.115982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.126928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.722 
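The randwrite errors now streaming above come from the second half of the test, set up by run_bperf_err randwrite 4096 128 as traced earlier: a fresh bdevperf instance (-w randwrite -o 4096 -q 128 -t 2 -z) is started on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with TCP data digests on (--ddgst), and crc32c error injection in the accel layer is switched from disable to corrupt before perform_tests kicks off the workload; this time the digest mismatches are caught in tcp.c:data_crc32_calc_done and the WRITEs again complete with COMMAND TRANSIENT TRANSPORT ERROR. A condensed sketch of that RPC sequence, assuming the SPDK source tree as the working directory and the sockets and target address seen in the trace:

  # bdevperf-side configuration (bperf_rpc in the trace targets /var/tmp/bperf.sock).
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Error injection is issued via rpc_cmd, i.e. the default RPC socket rather than bperf.sock,
  # corrupting crc32c results in the accel layer so the data digest checks fail.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # Drive the 2-second workload inside the already-running bdevperf process.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests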
[2024-07-15 20:23:50.127841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.127856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.138766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.722 [2024-07-15 20:23:50.139637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.139653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.722 [2024-07-15 20:23:50.150606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.722 [2024-07-15 20:23:50.151510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.722 [2024-07-15 20:23:50.151525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.983 [2024-07-15 20:23:50.162450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.983 [2024-07-15 20:23:50.163388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.983 [2024-07-15 20:23:50.163403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.174275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.984 [2024-07-15 20:23:50.175189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.175205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.186053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.984 [2024-07-15 20:23:50.186977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.186992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.197899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.984 [2024-07-15 20:23:50.198805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.198820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.209678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with 
pdu=0x2000190f6020 00:28:52.984 [2024-07-15 20:23:50.210549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.210565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.221487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.984 [2024-07-15 20:23:50.222405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.222421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.233297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.984 [2024-07-15 20:23:50.234185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.234200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.245068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.984 [2024-07-15 20:23:50.245987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.246002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.256846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.984 [2024-07-15 20:23:50.257751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.257766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.268647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.984 [2024-07-15 20:23:50.269555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.269569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.280409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.984 [2024-07-15 20:23:50.281296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.281311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.292222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.984 [2024-07-15 20:23:50.293135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.293149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.303966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.984 [2024-07-15 20:23:50.304866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.304881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.315765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.984 [2024-07-15 20:23:50.316669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.316684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.327563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.984 [2024-07-15 20:23:50.328432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.328447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.339337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.984 [2024-07-15 20:23:50.340234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.340250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.351136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.984 [2024-07-15 20:23:50.352034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.352049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.362926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:52.984 [2024-07-15 20:23:50.363826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.363841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.374817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:52.984 [2024-07-15 20:23:50.375725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.375740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.386624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:52.984 [2024-07-15 20:23:50.387535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.387551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.398390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:52.984 [2024-07-15 20:23:50.399286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.399301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:52.984 [2024-07-15 20:23:50.410201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:52.984 [2024-07-15 20:23:50.411103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.984 [2024-07-15 20:23:50.411118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.245 [2024-07-15 20:23:50.422036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.245 [2024-07-15 20:23:50.422943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.245 [2024-07-15 20:23:50.422961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.245 [2024-07-15 20:23:50.433822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.245 [2024-07-15 20:23:50.434719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.245 [2024-07-15 20:23:50.434735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.445611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.246 [2024-07-15 20:23:50.446511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.446526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.457432] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.246 [2024-07-15 20:23:50.458296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.458312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.469213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.246 [2024-07-15 20:23:50.470114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.470131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.481160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.246 [2024-07-15 20:23:50.482073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.482088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.492938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.246 [2024-07-15 20:23:50.493815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.493830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.504709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.246 [2024-07-15 20:23:50.505573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.505587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.516508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.246 [2024-07-15 20:23:50.517414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.517428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.528290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.246 [2024-07-15 20:23:50.529189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.529207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.540131] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.246 [2024-07-15 20:23:50.541041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.541056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.551909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.246 [2024-07-15 20:23:50.552806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.552821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.563680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.246 [2024-07-15 20:23:50.564588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.564603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.575491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.246 [2024-07-15 20:23:50.576398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.576413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.587275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.246 [2024-07-15 20:23:50.588170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.588185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.599082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.246 [2024-07-15 20:23:50.599981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.599995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.610884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.246 [2024-07-15 20:23:50.611799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.611814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 
20:23:50.622668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.246 [2024-07-15 20:23:50.623533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.623547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.634429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.246 [2024-07-15 20:23:50.635328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.635344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.646232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.246 [2024-07-15 20:23:50.647144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.647160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.658028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.246 [2024-07-15 20:23:50.658935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.658951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.246 [2024-07-15 20:23:50.669879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.246 [2024-07-15 20:23:50.670777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.246 [2024-07-15 20:23:50.670793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.681664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.508 [2024-07-15 20:23:50.682546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.682562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.693436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.508 [2024-07-15 20:23:50.694382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.694398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:28:53.508 [2024-07-15 20:23:50.705249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.508 [2024-07-15 20:23:50.706151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.706166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.717047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.508 [2024-07-15 20:23:50.717952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.717967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.728818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.508 [2024-07-15 20:23:50.729717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.729732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.740619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.508 [2024-07-15 20:23:50.741501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.741516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.752386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.508 [2024-07-15 20:23:50.753285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.753300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.764166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.508 [2024-07-15 20:23:50.765064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.765079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.775948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.508 [2024-07-15 20:23:50.776844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.776860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.787742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.508 [2024-07-15 20:23:50.788647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.788662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.799569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.508 [2024-07-15 20:23:50.800475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.800491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.811377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.508 [2024-07-15 20:23:50.812292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.812307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.823158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.508 [2024-07-15 20:23:50.824029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.824044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.834997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.508 [2024-07-15 20:23:50.835879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.835897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.846782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.508 [2024-07-15 20:23:50.847686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.847701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.858601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.508 [2024-07-15 20:23:50.859473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.859487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.870412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.508 [2024-07-15 20:23:50.871287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.871303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.882207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.508 [2024-07-15 20:23:50.883116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.883133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.893995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.508 [2024-07-15 20:23:50.894901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.894916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.905800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.508 [2024-07-15 20:23:50.906696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.508 [2024-07-15 20:23:50.906712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.508 [2024-07-15 20:23:50.917604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.508 [2024-07-15 20:23:50.918471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.509 [2024-07-15 20:23:50.918486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.509 [2024-07-15 20:23:50.929408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.509 [2024-07-15 20:23:50.930305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.509 [2024-07-15 20:23:50.930321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:50.941175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.770 [2024-07-15 20:23:50.942087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:50.942103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:50.952964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.770 [2024-07-15 20:23:50.953871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:50.953887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:50.964791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.770 [2024-07-15 20:23:50.965696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:50.965710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:50.976618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.770 [2024-07-15 20:23:50.977535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:50.977550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:50.988431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.770 [2024-07-15 20:23:50.989348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:50.989364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.000227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.770 [2024-07-15 20:23:51.001130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.001146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.011997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.770 [2024-07-15 20:23:51.012871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.012886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.023793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.770 [2024-07-15 20:23:51.024703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.024717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.035591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.770 [2024-07-15 20:23:51.036659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.036674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.047589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.770 [2024-07-15 20:23:51.048489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.048504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.059434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.770 [2024-07-15 20:23:51.060304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.060319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.071225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.770 [2024-07-15 20:23:51.072137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.770 [2024-07-15 20:23:51.072153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.770 [2024-07-15 20:23:51.083006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.771 [2024-07-15 20:23:51.083907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.083923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.094823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.771 [2024-07-15 20:23:51.095740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.095755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.106632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.771 [2024-07-15 20:23:51.107522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.107537] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.118437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.771 [2024-07-15 20:23:51.119348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.119362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.130222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.771 [2024-07-15 20:23:51.131130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.131145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.142008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:53.771 [2024-07-15 20:23:51.142916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.142934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.153818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:53.771 [2024-07-15 20:23:51.154729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.154744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.165643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:53.771 [2024-07-15 20:23:51.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.166529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.177433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:53.771 [2024-07-15 20:23:51.178337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.178352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.189262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:53.771 [2024-07-15 20:23:51.190128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.771 [2024-07-15 20:23:51.190143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:53.771 [2024-07-15 20:23:51.201056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.032 [2024-07-15 20:23:51.201962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.201978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.212875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.032 [2024-07-15 20:23:51.213778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.213793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.224730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.032 [2024-07-15 20:23:51.225602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.225617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.236541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.032 [2024-07-15 20:23:51.237419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.237434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.248337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.032 [2024-07-15 20:23:51.249263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.249278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.260143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.032 [2024-07-15 20:23:51.261038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.261053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.271955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.032 [2024-07-15 20:23:51.272866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.272881] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.283773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.032 [2024-07-15 20:23:51.284639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.284653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.295577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.032 [2024-07-15 20:23:51.296477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.296493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.307396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.032 [2024-07-15 20:23:51.308296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.308312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.319225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.032 [2024-07-15 20:23:51.320127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.320144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.331030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.032 [2024-07-15 20:23:51.331946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.331962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.342847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.032 [2024-07-15 20:23:51.343740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.343755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.354670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.032 [2024-07-15 20:23:51.355590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 
20:23:51.355606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.366482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.032 [2024-07-15 20:23:51.367399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.367415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.378407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.032 [2024-07-15 20:23:51.379313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.379328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.390197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.032 [2024-07-15 20:23:51.391092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.391108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.401998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.032 [2024-07-15 20:23:51.402904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.402919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.413831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.032 [2024-07-15 20:23:51.414714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.414730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.425641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.032 [2024-07-15 20:23:51.426532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.426547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.437504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.032 [2024-07-15 20:23:51.438409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:54.032 [2024-07-15 20:23:51.438425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.449322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.032 [2024-07-15 20:23:51.450236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.450257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.032 [2024-07-15 20:23:51.461112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.032 [2024-07-15 20:23:51.462030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.032 [2024-07-15 20:23:51.462046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.472972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.293 [2024-07-15 20:23:51.473882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.473898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.484790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.293 [2024-07-15 20:23:51.485691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.485706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.496592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.293 [2024-07-15 20:23:51.497502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.497517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.508408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.293 [2024-07-15 20:23:51.509266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.509281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.520231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.293 [2024-07-15 20:23:51.521140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6408 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.521155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.532054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.293 [2024-07-15 20:23:51.532976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.532991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.543868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.293 [2024-07-15 20:23:51.544779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.544794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.555654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.293 [2024-07-15 20:23:51.556539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.556554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.567518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.293 [2024-07-15 20:23:51.568389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.568405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.579315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.293 [2024-07-15 20:23:51.580237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.580252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.591115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.293 [2024-07-15 20:23:51.592021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.592036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.602956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.293 [2024-07-15 20:23:51.603875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8946 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.603891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.293 [2024-07-15 20:23:51.614756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.293 [2024-07-15 20:23:51.615675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.293 [2024-07-15 20:23:51.615691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.626559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.294 [2024-07-15 20:23:51.627467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.627482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.638373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.294 [2024-07-15 20:23:51.639270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.639286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.650151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.294 [2024-07-15 20:23:51.651051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.651066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.661964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.294 [2024-07-15 20:23:51.662871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.662887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.673790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.294 [2024-07-15 20:23:51.674695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.685616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.294 [2024-07-15 20:23:51.686539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 
nsid:1 lba:13972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.686555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.697419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.294 [2024-07-15 20:23:51.698301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.698316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.709231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.294 [2024-07-15 20:23:51.710144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.710160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.294 [2024-07-15 20:23:51.721015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.294 [2024-07-15 20:23:51.721920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.294 [2024-07-15 20:23:51.721934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.732850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.554 [2024-07-15 20:23:51.733748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.733763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.744640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.554 [2024-07-15 20:23:51.745541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.745555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.756479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.554 [2024-07-15 20:23:51.757377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.757396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.768281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.554 [2024-07-15 20:23:51.769188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:32 nsid:1 lba:11636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.769203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.780062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.554 [2024-07-15 20:23:51.780969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.780985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.791875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.554 [2024-07-15 20:23:51.792778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.792793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.803681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.554 [2024-07-15 20:23:51.804604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.804620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.815459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.554 [2024-07-15 20:23:51.816362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.827274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.554 [2024-07-15 20:23:51.828172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.828188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.839062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.554 [2024-07-15 20:23:51.839970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.839985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.850873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.554 [2024-07-15 20:23:51.851794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.851810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.862684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.554 [2024-07-15 20:23:51.863598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.863616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.874517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.554 [2024-07-15 20:23:51.875421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.875436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.886342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.554 [2024-07-15 20:23:51.887241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.887256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.898154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.554 [2024-07-15 20:23:51.899062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.554 [2024-07-15 20:23:51.899077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.554 [2024-07-15 20:23:51.909949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190ed920 00:28:54.554 [2024-07-15 20:23:51.910847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.555 [2024-07-15 20:23:51.910863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.555 [2024-07-15 20:23:51.921778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f6020 00:28:54.555 [2024-07-15 20:23:51.922700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.555 [2024-07-15 20:23:51.922716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.555 [2024-07-15 20:23:51.933574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f7970 00:28:54.555 [2024-07-15 20:23:51.934477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.555 [2024-07-15 20:23:51.934493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.555 [2024-07-15 20:23:51.945369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190f1430 00:28:54.555 [2024-07-15 20:23:51.946280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.555 [2024-07-15 20:23:51.946295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.555 [2024-07-15 20:23:51.957181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf1daa0) with pdu=0x2000190fc560 00:28:54.555 [2024-07-15 20:23:51.958083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.555 [2024-07-15 20:23:51.958099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:54.555 00:28:54.555 Latency(us) 00:28:54.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.555 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.555 nvme0n1 : 2.00 21539.94 84.14 0.00 0.00 5934.52 5215.57 13707.95 00:28:54.555 =================================================================================================================== 00:28:54.555 Total : 21539.94 84.14 0.00 0.00 5934.52 5215.57 13707.95 00:28:54.555 0 00:28:54.814 20:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:54.814 20:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:54.814 20:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:54.814 20:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:54.814 | .driver_specific 00:28:54.814 | .nvme_error 00:28:54.814 | .status_code 00:28:54.814 | .command_transient_transport_error' 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1164165 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1164165 ']' 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1164165 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1164165 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- 
# '[' reactor_1 = sudo ']' 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1164165' 00:28:54.814 killing process with pid 1164165 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1164165 00:28:54.814 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.814 00:28:54.814 Latency(us) 00:28:54.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.814 =================================================================================================================== 00:28:54.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.814 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1164165 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1164842 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1164842 /var/tmp/bperf.sock 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1164842 ']' 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:55.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.074 20:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.074 [2024-07-15 20:23:52.374559] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:28:55.074 [2024-07-15 20:23:52.374615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164842 ] 00:28:55.074 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:55.074 Zero copy mechanism will not be used. 
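For reference, a minimal sketch of how this stage brings up bdevperf for the 131072-byte, qd=16 randwrite pass and waits for its RPC socket. Paths and flags are taken from the trace above; the wait_for_sock helper is illustrative only (the harness uses its own waitforlisten), and rpc_get_methods is just a cheap RPC used here to probe the socket.

  # Launch bdevperf in "wait for tests" mode (-z) on a private RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  BPERF_PID=$!

  # Illustrative stand-in for waitforlisten: poll until the socket answers RPCs.
  wait_for_sock() {
      local i
      for i in $(seq 1 100); do
          "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_sock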
00:28:55.074 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.074 [2024-07-15 20:23:52.448191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.074 [2024-07-15 20:23:52.501314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.827 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:55.827 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:55.827 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:55.827 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:56.086 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:56.086 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.086 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.086 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.086 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.086 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.086 nvme0n1 00:28:56.346 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:56.346 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.346 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.346 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:56.346 20:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:56.346 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.346 Zero copy mechanism will not be used. 00:28:56.346 Running I/O for 2 seconds... 
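Putting the traced RPC sequence together, the error path of this pass is roughly the sketch below, assembled only from the commands visible above. rpc_cmd in the harness goes to the main SPDK application's default RPC socket (the nvmf target here), while bperf_rpc uses /var/tmp/bperf.sock; the accel_error_inject_error arguments are reproduced verbatim from the trace.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Initiator side: keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side: start from a clean state, with no crc32c error injection.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled (--ddgst).
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: switch crc32c injection to 'corrupt' so data digest checks fail during the run.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the 2-second randwrite workload through bdevperf's RPC interface.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests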
00:28:56.346 [2024-07-15 20:23:53.653986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.654469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.654495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.668628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.668937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.668956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.680948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.681275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.681293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.692223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.692451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.692467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.703592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.703718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.703732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.715524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.715762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.715787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.726377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.726613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.726630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.737644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.737994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.738010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.749142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.749489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.749505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.760791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.761128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.761148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.346 [2024-07-15 20:23:53.772925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.346 [2024-07-15 20:23:53.773165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.346 [2024-07-15 20:23:53.773181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.783936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.784267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.784284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.794987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.795326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.795342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.805447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.805683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.805698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.816452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.816686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.816701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.827287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.827622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.827638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.838548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.838860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.838876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.848239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.848610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.848626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.859362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.859773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.859789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.869879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.870218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.607 [2024-07-15 20:23:53.870234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.607 [2024-07-15 20:23:53.879978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.607 [2024-07-15 20:23:53.880216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.880232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.890388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.890714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.890730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.901273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.901618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.901634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.912992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.913340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.913356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.924077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.924222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.935884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.936281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.947046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.947229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.947244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.958134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.958310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 
[2024-07-15 20:23:53.958325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.968175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.968410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.968426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.978581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.978909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.978925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:53.989545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:53.989879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:53.989896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:54.001305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:54.001634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:54.001651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:54.012609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:54.012945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:54.012961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:54.023662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:54.023971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:54.023987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.608 [2024-07-15 20:23:54.034715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.608 [2024-07-15 20:23:54.034872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.608 [2024-07-15 20:23:54.034886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.868 [2024-07-15 20:23:54.046290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.868 [2024-07-15 20:23:54.046638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.868 [2024-07-15 20:23:54.046657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.868 [2024-07-15 20:23:54.057765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.868 [2024-07-15 20:23:54.057990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.868 [2024-07-15 20:23:54.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.868 [2024-07-15 20:23:54.069447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.868 [2024-07-15 20:23:54.069847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.868 [2024-07-15 20:23:54.069862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.868 [2024-07-15 20:23:54.080626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.868 [2024-07-15 20:23:54.080751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.868 [2024-07-15 20:23:54.080766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.092167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.092533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.092549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.103926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.104273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.104290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.113775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.114103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.114119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.123645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.123899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.123914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.134335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.134546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.134561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.144179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.144420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.144435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.153629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.153863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.153886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.163779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.164105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.164120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.175025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.175294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.175310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.186410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.186644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.186659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.198298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.198624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.198640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.209075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.209310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.209326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.220112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.220463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.220479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.230976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.231317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.231336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.243085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.243364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.243379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.254118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.254304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.254318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.264327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 
[2024-07-15 20:23:54.264440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.264455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.276270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.276611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.276627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.869 [2024-07-15 20:23:54.288370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:56.869 [2024-07-15 20:23:54.288529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.869 [2024-07-15 20:23:54.288544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.300818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.301006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.301021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.312188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.312309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.312324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.324629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.324983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.324999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.336691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.336872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.336886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.349481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.349840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.349856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.361513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.361802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.361818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.372563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.372831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.372847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.384836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.385196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.385212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.395757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.395942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.395957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.407472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.407707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.407730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.419517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.419776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.419791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.430621] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.430950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.430967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.441417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.441770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.441786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.452154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.452432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.452447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.463200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.463532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.463547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.474208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.474532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.474549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.484840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.485031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.485045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.495446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.495721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.495737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:57.131 [2024-07-15 20:23:54.505187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.505458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.505474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.515751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.516128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.516144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.526206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.526607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.526625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.536617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.537038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.537055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.547664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.548133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.548149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.131 [2024-07-15 20:23:54.558332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.131 [2024-07-15 20:23:54.558555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.131 [2024-07-15 20:23:54.558570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.392 [2024-07-15 20:23:54.568355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.392 [2024-07-15 20:23:54.568549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.392 [2024-07-15 20:23:54.568564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.392 [2024-07-15 20:23:54.577803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.392 [2024-07-15 20:23:54.577984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.392 [2024-07-15 20:23:54.577999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.392 [2024-07-15 20:23:54.588006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.392 [2024-07-15 20:23:54.588397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.392 [2024-07-15 20:23:54.588413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.392 [2024-07-15 20:23:54.599581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.392 [2024-07-15 20:23:54.599907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.392 [2024-07-15 20:23:54.599923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.392 [2024-07-15 20:23:54.611141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.392 [2024-07-15 20:23:54.611304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.392 [2024-07-15 20:23:54.611319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.621953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.622386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.622402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.632769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.633232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.633247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.643430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.643783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.643799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.654378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.654562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.654577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.664394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.664719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.664735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.675446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.675695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.675711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.687410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.687737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.687754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.697793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.698045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.698068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.707420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.707677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.707693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.717394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.717688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.717704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.727483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.727823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.727839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.736718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.737002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.737018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.747373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.747657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.747672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.758064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.758419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.758436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.768732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.769150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.769166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.780507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.780694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.780709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.791786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.792069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 
[2024-07-15 20:23:54.792085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.801881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.802156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.802181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.393 [2024-07-15 20:23:54.812903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.393 [2024-07-15 20:23:54.813149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.393 [2024-07-15 20:23:54.813164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.654 [2024-07-15 20:23:54.823992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.654 [2024-07-15 20:23:54.824262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.654 [2024-07-15 20:23:54.824285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.654 [2024-07-15 20:23:54.834745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.654 [2024-07-15 20:23:54.835015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.654 [2024-07-15 20:23:54.835030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.654 [2024-07-15 20:23:54.845376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.654 [2024-07-15 20:23:54.845808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.654 [2024-07-15 20:23:54.845823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.654 [2024-07-15 20:23:54.855859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.654 [2024-07-15 20:23:54.856127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.654 [2024-07-15 20:23:54.856143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.654 [2024-07-15 20:23:54.866901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.654 [2024-07-15 20:23:54.867224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.654 [2024-07-15 20:23:54.867240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.654 [2024-07-15 20:23:54.877868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.878215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.878230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.889654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.890050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.890067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.901198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.901451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.901466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.912843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.913215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.913230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.923833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.924139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.924156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.934602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.934783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.934798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.945745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.946213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.946229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.957135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.957587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.957603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.969089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.969280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.969295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.979449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.979744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.979761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.988982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:54.989217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:54.989233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:54.999986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.000248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.000264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.009912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.010140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.010156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.020066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.020354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.020378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.030986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.031232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.031248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.041071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.041351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.041367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.051223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.051594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.051610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.060588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.060775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.060790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.069751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.070095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.070111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.655 [2024-07-15 20:23:55.079111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.655 [2024-07-15 20:23:55.079420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.655 [2024-07-15 20:23:55.079437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.917 [2024-07-15 20:23:55.088344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.917 
[2024-07-15 20:23:55.088695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.917 [2024-07-15 20:23:55.088711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.917 [2024-07-15 20:23:55.098097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.917 [2024-07-15 20:23:55.098320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.917 [2024-07-15 20:23:55.098335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.917 [2024-07-15 20:23:55.107307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.917 [2024-07-15 20:23:55.107559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.917 [2024-07-15 20:23:55.107575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.917 [2024-07-15 20:23:55.116196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.917 [2024-07-15 20:23:55.116485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.116501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.125645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.125869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.125884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.136373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.136662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.136679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.145895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.146168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.146183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.155716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.156082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.156098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.165338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.165646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.165663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.174870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.175115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.175136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.184500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.184845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.184861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.193916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.194250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.194266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.204893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.205173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.215076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.215576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.215592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.226049] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.226500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.226516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.236738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.237193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.237210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.247433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.247955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.247974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.257922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.258104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.258119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.268233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.268587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.268604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.280151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.280666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.280683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.290902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.291260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.291276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:57.918 [2024-07-15 20:23:55.300128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.300312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.300327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.309742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.310042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.310059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.918 [2024-07-15 20:23:55.320208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.918 [2024-07-15 20:23:55.320546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.918 [2024-07-15 20:23:55.320562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.919 [2024-07-15 20:23:55.329900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.919 [2024-07-15 20:23:55.330241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.919 [2024-07-15 20:23:55.330258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.919 [2024-07-15 20:23:55.339331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:57.919 [2024-07-15 20:23:55.339585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.919 [2024-07-15 20:23:55.339602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.349135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.349409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.349425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.359843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.360103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.360118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.370281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.370620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.370636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.380535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.380760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.390537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.390730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.390745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.400604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.400891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.400907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.409986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.410253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.410268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.420223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.420575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.420591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.429942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.430127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.430142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.437720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.438080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.438096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.446780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.447116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.447136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.457384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.457657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.457672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.467213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.467435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.467450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.477715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.478031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.478047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.487220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.487773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.487788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.498037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.498320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.498336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.507722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.507971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.507990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.518649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.519019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.519034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.529288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.529621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.529637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.540368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.540666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.540682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.551946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.552207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.552224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.562399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.562740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.562756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.572846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.573181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 
[2024-07-15 20:23:55.573197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.583570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.583891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.181 [2024-07-15 20:23:55.583907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.181 [2024-07-15 20:23:55.594525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.181 [2024-07-15 20:23:55.594766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.182 [2024-07-15 20:23:55.594781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.182 [2024-07-15 20:23:55.603754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.182 [2024-07-15 20:23:55.603980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.182 [2024-07-15 20:23:55.603995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.443 [2024-07-15 20:23:55.613317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.443 [2024-07-15 20:23:55.613497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.443 [2024-07-15 20:23:55.613512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.443 [2024-07-15 20:23:55.622661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.443 [2024-07-15 20:23:55.622848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.443 [2024-07-15 20:23:55.622863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.443 [2024-07-15 20:23:55.632729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1012ca0) with pdu=0x2000190fef90 00:28:58.443 [2024-07-15 20:23:55.632880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.443 [2024-07-15 20:23:55.632895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.443 00:28:58.443 Latency(us) 00:28:58.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.443 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:58.443 nvme0n1 : 2.00 2880.12 360.01 0.00 0.00 5546.70 3167.57 19333.12 
00:28:58.443 =================================================================================================================== 00:28:58.443 Total : 2880.12 360.01 0.00 0.00 5546.70 3167.57 19333.12 00:28:58.443 0 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:58.443 | .driver_specific 00:28:58.443 | .nvme_error 00:28:58.443 | .status_code 00:28:58.443 | .command_transient_transport_error' 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 186 > 0 )) 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1164842 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1164842 ']' 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1164842 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.443 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1164842 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1164842' 00:28:58.704 killing process with pid 1164842 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1164842 00:28:58.704 Received shutdown signal, test time was about 2.000000 seconds 00:28:58.704 00:28:58.704 Latency(us) 00:28:58.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.704 =================================================================================================================== 00:28:58.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1164842 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1162440 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1162440 ']' 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1162440 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.704 20:23:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1162440 00:28:58.704 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:28:58.704 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:58.704 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1162440' 00:28:58.704 killing process with pid 1162440 00:28:58.704 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1162440 00:28:58.704 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1162440 00:28:58.966 00:28:58.966 real 0m15.876s 00:28:58.966 user 0m31.222s 00:28:58.966 sys 0m3.121s 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.966 ************************************ 00:28:58.966 END TEST nvmf_digest_error 00:28:58.966 ************************************ 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.966 rmmod nvme_tcp 00:28:58.966 rmmod nvme_fabrics 00:28:58.966 rmmod nvme_keyring 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1162440 ']' 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1162440 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1162440 ']' 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1162440 00:28:58.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1162440) - No such process 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1162440 is not found' 00:28:58.966 Process with pid 1162440 is not found 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.966 20:23:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.966 20:23:56 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.550 20:23:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:01.550 00:29:01.550 real 0m41.687s 00:29:01.550 user 1m5.015s 00:29:01.550 sys 0m11.722s 00:29:01.550 20:23:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.550 20:23:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.550 ************************************ 00:29:01.550 END TEST nvmf_digest 00:29:01.550 ************************************ 00:29:01.550 20:23:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:01.550 20:23:58 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:01.550 20:23:58 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:01.550 20:23:58 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:01.550 20:23:58 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:01.550 20:23:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:01.550 20:23:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.550 20:23:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.550 ************************************ 00:29:01.550 START TEST nvmf_bdevperf 00:29:01.550 ************************************ 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:01.550 * Looking for test storage... 00:29:01.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:01.550 20:23:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.145 20:24:05 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:08.145 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:08.145 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.145 20:24:05 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:08.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:08.145 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:08.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:29:08.145 00:29:08.145 --- 10.0.0.2 ping statistics --- 00:29:08.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.145 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:29:08.145 00:29:08.145 --- 10.0.0.1 ping statistics --- 00:29:08.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.145 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:29:08.145 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1169660 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1169660 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1169660 ']' 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:08.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.146 20:24:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.408 [2024-07-15 20:24:05.617058] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:29:08.408 [2024-07-15 20:24:05.617130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.408 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.408 [2024-07-15 20:24:05.705262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.408 [2024-07-15 20:24:05.801050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.408 [2024-07-15 20:24:05.801107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.408 [2024-07-15 20:24:05.801116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.408 [2024-07-15 20:24:05.801132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.408 [2024-07-15 20:24:05.801142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.408 [2024-07-15 20:24:05.801284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.408 [2024-07-15 20:24:05.801570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.408 [2024-07-15 20:24:05.801571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.980 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.980 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:08.980 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.980 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.980 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.240 [2024-07-15 20:24:06.451372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.240 Malloc0 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDK00000000000001 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.240 [2024-07-15 20:24:06.519510] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.240 { 00:29:09.240 "params": { 00:29:09.240 "name": "Nvme$subsystem", 00:29:09.240 "trtype": "$TEST_TRANSPORT", 00:29:09.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.240 "adrfam": "ipv4", 00:29:09.240 "trsvcid": "$NVMF_PORT", 00:29:09.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.240 "hdgst": ${hdgst:-false}, 00:29:09.240 "ddgst": ${ddgst:-false} 00:29:09.240 }, 00:29:09.240 "method": "bdev_nvme_attach_controller" 00:29:09.240 } 00:29:09.240 EOF 00:29:09.240 )") 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:09.240 20:24:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:09.240 "params": { 00:29:09.240 "name": "Nvme1", 00:29:09.240 "trtype": "tcp", 00:29:09.240 "traddr": "10.0.0.2", 00:29:09.240 "adrfam": "ipv4", 00:29:09.240 "trsvcid": "4420", 00:29:09.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:09.240 "hdgst": false, 00:29:09.240 "ddgst": false 00:29:09.240 }, 00:29:09.240 "method": "bdev_nvme_attach_controller" 00:29:09.240 }' 00:29:09.240 [2024-07-15 20:24:06.573501] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:29:09.240 [2024-07-15 20:24:06.573547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169989 ] 00:29:09.240 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.240 [2024-07-15 20:24:06.631347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.500 [2024-07-15 20:24:06.695592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.500 Running I/O for 1 seconds... 00:29:10.440 00:29:10.440 Latency(us) 00:29:10.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.440 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:10.440 Verification LBA range: start 0x0 length 0x4000 00:29:10.440 Nvme1n1 : 1.01 8947.11 34.95 0.00 0.00 14227.69 2034.35 16384.00 00:29:10.440 =================================================================================================================== 00:29:10.440 Total : 8947.11 34.95 0.00 0.00 14227.69 2034.35 16384.00 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1170320 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.700 { 00:29:10.700 "params": { 00:29:10.700 "name": "Nvme$subsystem", 00:29:10.700 "trtype": "$TEST_TRANSPORT", 00:29:10.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.700 "adrfam": "ipv4", 00:29:10.700 "trsvcid": "$NVMF_PORT", 00:29:10.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.700 "hdgst": ${hdgst:-false}, 00:29:10.700 "ddgst": ${ddgst:-false} 00:29:10.700 }, 00:29:10.700 "method": "bdev_nvme_attach_controller" 00:29:10.700 } 00:29:10.700 EOF 00:29:10.700 )") 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:10.700 20:24:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.700 "params": { 00:29:10.700 "name": "Nvme1", 00:29:10.700 "trtype": "tcp", 00:29:10.700 "traddr": "10.0.0.2", 00:29:10.700 "adrfam": "ipv4", 00:29:10.700 "trsvcid": "4420", 00:29:10.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.700 "hdgst": false, 00:29:10.700 "ddgst": false 00:29:10.700 }, 00:29:10.700 "method": "bdev_nvme_attach_controller" 00:29:10.700 }' 00:29:10.700 [2024-07-15 20:24:08.040077] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:29:10.700 [2024-07-15 20:24:08.040147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170320 ] 00:29:10.700 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.700 [2024-07-15 20:24:08.100311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.959 [2024-07-15 20:24:08.164155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.218 Running I/O for 15 seconds... 00:29:13.763 20:24:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1169660 00:29:13.763 20:24:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:13.763 [2024-07-15 20:24:10.997312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.763 [2024-07-15 20:24:10.997685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:13.763 [2024-07-15 20:24:10.997695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:13.764 [2024-07-15 20:24:10.997860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.997989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.997997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 
20:24:10.998023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.764 [2024-07-15 20:24:10.998388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.764 [2024-07-15 20:24:10.998395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113360 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 [2024-07-15 20:24:10.998799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 [2024-07-15 20:24:10.998816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 [2024-07-15 20:24:10.998832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 [2024-07-15 20:24:10.998850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 
[2024-07-15 20:24:10.998867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 [2024-07-15 20:24:10.998883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.765 [2024-07-15 20:24:10.998898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.998991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.998998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.765 [2024-07-15 20:24:10.999110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.765 [2024-07-15 20:24:10.999117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.766 [2024-07-15 20:24:10.999479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.766 [2024-07-15 20:24:10.999499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.766 [2024-07-15 20:24:10.999516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.766 [2024-07-15 20:24:10.999532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.766 [2024-07-15 20:24:10.999550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.766 [2024-07-15 20:24:10.999566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.766 [2024-07-15 20:24:10.999583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20a00 is same with the state(5) to be set 00:29:13.766 [2024-07-15 20:24:10.999599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:13.766 [2024-07-15 20:24:10.999605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:13.766 [2024-07-15 20:24:10.999612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113560 len:8 PRP1 0x0 PRP2 0x0 00:29:13.766 [2024-07-15 20:24:10.999619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.766 [2024-07-15 20:24:10.999658] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf20a00 was disconnected and freed. reset controller. 00:29:13.766 [2024-07-15 20:24:11.003250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.766 [2024-07-15 20:24:11.003296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.766 [2024-07-15 20:24:11.004182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.766 [2024-07-15 20:24:11.004208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.766 [2024-07-15 20:24:11.004216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.766 [2024-07-15 20:24:11.004441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.766 [2024-07-15 20:24:11.004661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.766 [2024-07-15 20:24:11.004669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.766 [2024-07-15 20:24:11.004677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.766 [2024-07-15 20:24:11.008177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.766 [2024-07-15 20:24:11.017275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.766 [2024-07-15 20:24:11.017905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.766 [2024-07-15 20:24:11.017921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.766 [2024-07-15 20:24:11.017929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.766 [2024-07-15 20:24:11.018152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.766 [2024-07-15 20:24:11.018370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.766 [2024-07-15 20:24:11.018378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.766 [2024-07-15 20:24:11.018385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.766 [2024-07-15 20:24:11.021890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
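Annotation: every queued READ/WRITE on qpair 1 above is completed manually with status "(00/08)", i.e. status code type 0x0 (generic command status) and status code 0x08, which the NVMe spec names "Command Aborted due to SQ Deletion"; SPDK prints it as ABORTED - SQ DELETION once the TCP qpair (0xf20a00) is disconnected and freed. A minimal, illustrative decode of that sct/sc pair is sketched below; the helper name is hypothetical and this is not SPDK's print routine.

```c
/* Illustrative decode of the "(00/08)" pairs printed above.
 * sct = status code type, sc = status code. Hypothetical helper,
 * not SPDK's spdk_nvme_print_completion(). */
#include <stdio.h>
#include <stdint.h>

static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x08) {
        /* Generic command status, SC 08h: aborted due to SQ deletion. */
        return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int main(void)
{
    /* The completions logged above all carry sct/sc = 00/08. */
    printf("%s\n", nvme_status_str(0x0, 0x08));
    return 0;
}
```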
00:29:13.766 [2024-07-15 20:24:11.031188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.766 [2024-07-15 20:24:11.031816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.766 [2024-07-15 20:24:11.031832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.766 [2024-07-15 20:24:11.031840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.766 [2024-07-15 20:24:11.032055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.766 [2024-07-15 20:24:11.032277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.766 [2024-07-15 20:24:11.032286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.766 [2024-07-15 20:24:11.032292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.766 [2024-07-15 20:24:11.035787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.766 [2024-07-15 20:24:11.045089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.766 [2024-07-15 20:24:11.045759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.766 [2024-07-15 20:24:11.045777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.766 [2024-07-15 20:24:11.045785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.766 [2024-07-15 20:24:11.046002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.766 [2024-07-15 20:24:11.046224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.766 [2024-07-15 20:24:11.046233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.766 [2024-07-15 20:24:11.046240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.766 [2024-07-15 20:24:11.049737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
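Annotation: from here on the same cycle repeats roughly every 14 ms: nvme_ctrlr_disconnect announces a reset, posix_sock_create's connect() to 10.0.0.2 port 4420 is refused with errno 111 (ECONNREFUSED), controller reinitialization fails, and bdev_nvme logs "Resetting controller failed" before the next attempt. The sketch below is a plain-socket illustration of where that errno comes from, assuming nothing is accepting NVMe/TCP connections on the target during this window; it is not SPDK code.

```c
/* Illustrative only: a plain TCP connect() to the same target the log shows
 * (10.0.0.2:4420). If no listener is accepting connections there, connect()
 * fails with errno 111 (ECONNREFUSED), matching the posix_sock_create error
 * above. Not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the target refusing connections this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```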
00:29:13.766 [2024-07-15 20:24:11.059019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.766 [2024-07-15 20:24:11.059651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.059667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.059674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.059890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.060106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.060115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.060128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.063627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.767 [2024-07-15 20:24:11.072915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.073553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.073569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.073583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.073799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.074014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.074022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.074030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.077528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.767 [2024-07-15 20:24:11.086811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.087424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.087440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.087447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.087662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.087878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.087886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.087893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.091393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.767 [2024-07-15 20:24:11.100673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.101321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.101337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.101344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.101560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.101775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.101783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.101790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.105289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.767 [2024-07-15 20:24:11.114566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.115231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.115246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.115253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.115469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.115685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.115696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.115703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.119202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.767 [2024-07-15 20:24:11.128480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.129099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.129114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.129127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.129343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.129559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.129566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.129573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.133069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.767 [2024-07-15 20:24:11.142349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.143007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.143022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.143029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.143252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.143468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.143476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.143483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.146991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.767 [2024-07-15 20:24:11.156283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.156943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.156958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.156965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.157187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.157403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.157411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.157417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.160914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.767 [2024-07-15 20:24:11.170206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.170825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.767 [2024-07-15 20:24:11.170839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.767 [2024-07-15 20:24:11.170847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.767 [2024-07-15 20:24:11.171062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.767 [2024-07-15 20:24:11.171285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.767 [2024-07-15 20:24:11.171295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.767 [2024-07-15 20:24:11.171302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.767 [2024-07-15 20:24:11.174798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.767 [2024-07-15 20:24:11.184086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.767 [2024-07-15 20:24:11.184758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.768 [2024-07-15 20:24:11.184773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:13.768 [2024-07-15 20:24:11.184781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:13.768 [2024-07-15 20:24:11.184996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:13.768 [2024-07-15 20:24:11.185218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.768 [2024-07-15 20:24:11.185228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.768 [2024-07-15 20:24:11.185234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.768 [2024-07-15 20:24:11.188727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.031 [2024-07-15 20:24:11.198007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.031 [2024-07-15 20:24:11.198626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.031 [2024-07-15 20:24:11.198641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.031 [2024-07-15 20:24:11.198648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.031 [2024-07-15 20:24:11.198864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.031 [2024-07-15 20:24:11.199080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.031 [2024-07-15 20:24:11.199087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.031 [2024-07-15 20:24:11.199094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.031 [2024-07-15 20:24:11.202596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.031 [2024-07-15 20:24:11.211875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.031 [2024-07-15 20:24:11.212490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.031 [2024-07-15 20:24:11.212506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.031 [2024-07-15 20:24:11.212513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.031 [2024-07-15 20:24:11.212732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.031 [2024-07-15 20:24:11.212948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.031 [2024-07-15 20:24:11.212955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.031 [2024-07-15 20:24:11.212962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.031 [2024-07-15 20:24:11.216460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.031 [2024-07-15 20:24:11.225740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.031 [2024-07-15 20:24:11.226374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.031 [2024-07-15 20:24:11.226389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.031 [2024-07-15 20:24:11.226397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.031 [2024-07-15 20:24:11.226612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.031 [2024-07-15 20:24:11.226828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.031 [2024-07-15 20:24:11.226835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.031 [2024-07-15 20:24:11.226842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.031 [2024-07-15 20:24:11.230343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.031 [2024-07-15 20:24:11.239621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.031 [2024-07-15 20:24:11.240367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.031 [2024-07-15 20:24:11.240405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.031 [2024-07-15 20:24:11.240416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.031 [2024-07-15 20:24:11.240658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.031 [2024-07-15 20:24:11.240879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.031 [2024-07-15 20:24:11.240887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.031 [2024-07-15 20:24:11.240894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.031 [2024-07-15 20:24:11.244405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.031 [2024-07-15 20:24:11.253495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.031 [2024-07-15 20:24:11.254043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.031 [2024-07-15 20:24:11.254060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.031 [2024-07-15 20:24:11.254068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.031 [2024-07-15 20:24:11.254290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.031 [2024-07-15 20:24:11.254507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.031 [2024-07-15 20:24:11.254515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.031 [2024-07-15 20:24:11.254527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.031 [2024-07-15 20:24:11.258027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.031 [2024-07-15 20:24:11.267323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.031 [2024-07-15 20:24:11.267958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.032 [2024-07-15 20:24:11.267973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.032 [2024-07-15 20:24:11.267980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.032 [2024-07-15 20:24:11.268202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.032 [2024-07-15 20:24:11.268419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.032 [2024-07-15 20:24:11.268427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.032 [2024-07-15 20:24:11.268434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.032 [2024-07-15 20:24:11.271932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.032 [2024-07-15 20:24:11.281219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.032 [2024-07-15 20:24:11.281748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.032 [2024-07-15 20:24:11.281764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.032 [2024-07-15 20:24:11.281771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.032 [2024-07-15 20:24:11.281987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.032 [2024-07-15 20:24:11.282208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.032 [2024-07-15 20:24:11.282216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.032 [2024-07-15 20:24:11.282223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.032 [2024-07-15 20:24:11.285718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.032 [2024-07-15 20:24:11.295002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.032 [2024-07-15 20:24:11.295640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.032 [2024-07-15 20:24:11.295656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.032 [2024-07-15 20:24:11.295663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.032 [2024-07-15 20:24:11.295878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.032 [2024-07-15 20:24:11.296094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.032 [2024-07-15 20:24:11.296102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.032 [2024-07-15 20:24:11.296109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.032 [2024-07-15 20:24:11.299611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.032 - 00:29:14.561 [2024-07-15 20:24:11.308896 - 20:24:11.964879] (48 further repetitions of the same reset cycle for [nqn.2016-06.io.spdk:cnode1], roughly every 13-14 ms: resetting controller; connect() failed, errno = 111; sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420; recv state of tqpair=0xcee3b0 unchanged; Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor; Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.)
00:29:14.561 [2024-07-15 20:24:11.973953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.561 [2024-07-15 20:24:11.974671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.561 [2024-07-15 20:24:11.974707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.561 [2024-07-15 20:24:11.974718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.561 [2024-07-15 20:24:11.974954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.561 [2024-07-15 20:24:11.975182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.561 [2024-07-15 20:24:11.975192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.561 [2024-07-15 20:24:11.975199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.561 [2024-07-15 20:24:11.978697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.561 [2024-07-15 20:24:11.987761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.561 [2024-07-15 20:24:11.988505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.561 [2024-07-15 20:24:11.988542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.561 [2024-07-15 20:24:11.988552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.561 [2024-07-15 20:24:11.988788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.561 [2024-07-15 20:24:11.989008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.561 [2024-07-15 20:24:11.989017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.561 [2024-07-15 20:24:11.989024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:11.992531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.823 [2024-07-15 20:24:12.001598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.002408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.002445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.002455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.002691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.002911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.002920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.002933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.006437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.823 [2024-07-15 20:24:12.015508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.016181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.016206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.016214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.016435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.016652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.016661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.016668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.020168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.823 [2024-07-15 20:24:12.029437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.030100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.030116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.030128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.030344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.030560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.030568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.030574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.034062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.823 [2024-07-15 20:24:12.043241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.043872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.043889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.043897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.044113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.044334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.044342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.044349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.047854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.823 [2024-07-15 20:24:12.057123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.057755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.057791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.057801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.058037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.058270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.058280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.058288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.061785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.823 [2024-07-15 20:24:12.071056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.071814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.071851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.071861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.072097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.072326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.072335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.072342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.075839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.823 [2024-07-15 20:24:12.084910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.085624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.085662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.085673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.085912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.086141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.086151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.086158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.089654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.823 [2024-07-15 20:24:12.098724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.099446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.099483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.099494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.099730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.099955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.099963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.099971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.103477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.823 [2024-07-15 20:24:12.112544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.823 [2024-07-15 20:24:12.113326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.823 [2024-07-15 20:24:12.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.823 [2024-07-15 20:24:12.113374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.823 [2024-07-15 20:24:12.113613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.823 [2024-07-15 20:24:12.113833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.823 [2024-07-15 20:24:12.113847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.823 [2024-07-15 20:24:12.113855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.823 [2024-07-15 20:24:12.117360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.823 [2024-07-15 20:24:12.126434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.127112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.127135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.127143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.127360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.127576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.127583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.127590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.131085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.824 [2024-07-15 20:24:12.140360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.140973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.140988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.140995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.141218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.141434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.141442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.141448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.144941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.824 [2024-07-15 20:24:12.154238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.154981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.155017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.155027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.155272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.155492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.155501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.155508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.159005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.824 [2024-07-15 20:24:12.168072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.168797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.168833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.168844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.169079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.169308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.169318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.169326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.172823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.824 [2024-07-15 20:24:12.181892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.182524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.182561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.182571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.182807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.183027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.183036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.183043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.186550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.824 [2024-07-15 20:24:12.195826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.196568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.196604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.196619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.196856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.197075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.197083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.197091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.200596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.824 [2024-07-15 20:24:12.209667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.210400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.210437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.210447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.210683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.210902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.210911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.210918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.214426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.824 [2024-07-15 20:24:12.223494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.224205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.224241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.224252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.224492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.224712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.224720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.224728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.228233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.824 [2024-07-15 20:24:12.237346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.238112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.238155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.238167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.238405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.238625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.238638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.238646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.824 [2024-07-15 20:24:12.242154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.824 [2024-07-15 20:24:12.251232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.824 [2024-07-15 20:24:12.251960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.824 [2024-07-15 20:24:12.251996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:14.824 [2024-07-15 20:24:12.252006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:14.824 [2024-07-15 20:24:12.252251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:14.824 [2024-07-15 20:24:12.252472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.824 [2024-07-15 20:24:12.252480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.824 [2024-07-15 20:24:12.252488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.255985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.087 [2024-07-15 20:24:12.265053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.265810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.265847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.265857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.266094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.266322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.266332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.266340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.269838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.087 [2024-07-15 20:24:12.278900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.279641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.279677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.279688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.279924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.280151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.280160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.280167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.283663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.087 [2024-07-15 20:24:12.292738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.293468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.293505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.293515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.293751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.293971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.293979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.293987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.297535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.087 [2024-07-15 20:24:12.306606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.307400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.307436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.307447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.307683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.307903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.307911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.307918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.311427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.087 [2024-07-15 20:24:12.320491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.321224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.321261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.321273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.321511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.321731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.321739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.321746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.325253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.087 [2024-07-15 20:24:12.334319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.335074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.335111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.335129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.335370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.335591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.335599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.335606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.339101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.087 [2024-07-15 20:24:12.348189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.348906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.348942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.348952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.349196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.349417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.349426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.349433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.352930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.087 [2024-07-15 20:24:12.362007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.362771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.362806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.362817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.363052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.363281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.363291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.363298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.366797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.087 [2024-07-15 20:24:12.375859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.376578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.376615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.376626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.376861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.377081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.377089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.377101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.380605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.087 [2024-07-15 20:24:12.389671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.390209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.390245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.390256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.390496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.390716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.087 [2024-07-15 20:24:12.390724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.087 [2024-07-15 20:24:12.390731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.087 [2024-07-15 20:24:12.394238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.087 [2024-07-15 20:24:12.403507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.087 [2024-07-15 20:24:12.404135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.087 [2024-07-15 20:24:12.404152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.087 [2024-07-15 20:24:12.404160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.087 [2024-07-15 20:24:12.404376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.087 [2024-07-15 20:24:12.404592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.404600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.404606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.408100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.088 [2024-07-15 20:24:12.417368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.418090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.418133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.418144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.418379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.418600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.418608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.418616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.422118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.088 [2024-07-15 20:24:12.431218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.431933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.431969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.431980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.432225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.432447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.432455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.432462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.435959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.088 [2024-07-15 20:24:12.445027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.445727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.445763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.445774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.446010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.446239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.446248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.446256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.449767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.088 [2024-07-15 20:24:12.458834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.459559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.459595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.459606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.459842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.460062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.460070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.460077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.463584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.088 [2024-07-15 20:24:12.472649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.473242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.473277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.473289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.473525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.473752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.473761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.473769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.477276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.088 [2024-07-15 20:24:12.486547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.487336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.487372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.487383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.487619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.487839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.487847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.487855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.491359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.088 [2024-07-15 20:24:12.500429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.501159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.501195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.501205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.501442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.501662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.501670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.501677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.088 [2024-07-15 20:24:12.505182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.088 [2024-07-15 20:24:12.514249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.088 [2024-07-15 20:24:12.515007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.088 [2024-07-15 20:24:12.515043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.088 [2024-07-15 20:24:12.515053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.088 [2024-07-15 20:24:12.515299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.088 [2024-07-15 20:24:12.515520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.088 [2024-07-15 20:24:12.515529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.088 [2024-07-15 20:24:12.515537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.519041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.351 [2024-07-15 20:24:12.528112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.528863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.528899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.528910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.529153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.529374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.529382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.529390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.532888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.351 [2024-07-15 20:24:12.541958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.542407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.542428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.542436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.542653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.542870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.542878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.542885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.546380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.351 [2024-07-15 20:24:12.555866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.556407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.556424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.556431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.556646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.556862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.556870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.556877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.560372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.351 [2024-07-15 20:24:12.569641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.570259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.570275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.570287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.570503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.570718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.570726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.570733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.574231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.351 [2024-07-15 20:24:12.583500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.584036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.584050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.584057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.584277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.584493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.584501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.584508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.587997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.351 [2024-07-15 20:24:12.597265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.597882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.597896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.597904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.598119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.598340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.598348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.598354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.601843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.351 [2024-07-15 20:24:12.611105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.351 [2024-07-15 20:24:12.611729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-07-15 20:24:12.611743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.351 [2024-07-15 20:24:12.611751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.351 [2024-07-15 20:24:12.611966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.351 [2024-07-15 20:24:12.612187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.351 [2024-07-15 20:24:12.612199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.351 [2024-07-15 20:24:12.612206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.351 [2024-07-15 20:24:12.615694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.351 [2024-07-15 20:24:12.625015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.625613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.625629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.625637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.625852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.626068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.626076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.626082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.629578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.352 [2024-07-15 20:24:12.638848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.639497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.639534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.639545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.639783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.640003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.640012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.640019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.643523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.352 [2024-07-15 20:24:12.652599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.653230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.653250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.653257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.653474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.653690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.653698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.653705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.657204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.352 [2024-07-15 20:24:12.666484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.667106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.667125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.667133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.667349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.667565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.667573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.667579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.671069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.352 [2024-07-15 20:24:12.680379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.680931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.680946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.680953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.681174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.681390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.681398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.681405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.684897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.352 [2024-07-15 20:24:12.694172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.694798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.694812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.694820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.695035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.695257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.695266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.695273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.698763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.352 [2024-07-15 20:24:12.708030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.708677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.708692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.708700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.708918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.709139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.709146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.709153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.712644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.352 [2024-07-15 20:24:12.721914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.722543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.722558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.722565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.722781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.722997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.723004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.723011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.726504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.352 [2024-07-15 20:24:12.735772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.736332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.736347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.736354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.352 [2024-07-15 20:24:12.736570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.352 [2024-07-15 20:24:12.736785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.352 [2024-07-15 20:24:12.736794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.352 [2024-07-15 20:24:12.736800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.352 [2024-07-15 20:24:12.740293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.352 [2024-07-15 20:24:12.749558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.352 [2024-07-15 20:24:12.750201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.352 [2024-07-15 20:24:12.750216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.352 [2024-07-15 20:24:12.750223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.353 [2024-07-15 20:24:12.750439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.353 [2024-07-15 20:24:12.750654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.353 [2024-07-15 20:24:12.750662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.353 [2024-07-15 20:24:12.750672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.353 [2024-07-15 20:24:12.754177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.353 [2024-07-15 20:24:12.763448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.353 [2024-07-15 20:24:12.764107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.353 [2024-07-15 20:24:12.764127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.353 [2024-07-15 20:24:12.764135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.353 [2024-07-15 20:24:12.764350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.353 [2024-07-15 20:24:12.764566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.353 [2024-07-15 20:24:12.764574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.353 [2024-07-15 20:24:12.764581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.353 [2024-07-15 20:24:12.768071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.353 [2024-07-15 20:24:12.777343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.353 [2024-07-15 20:24:12.777876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.353 [2024-07-15 20:24:12.777890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.353 [2024-07-15 20:24:12.777898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.353 [2024-07-15 20:24:12.778113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.353 [2024-07-15 20:24:12.778334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.353 [2024-07-15 20:24:12.778343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.353 [2024-07-15 20:24:12.778350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.616 [2024-07-15 20:24:12.781839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.616 [2024-07-15 20:24:12.791113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.616 [2024-07-15 20:24:12.791737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.616 [2024-07-15 20:24:12.791751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.616 [2024-07-15 20:24:12.791758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.616 [2024-07-15 20:24:12.791974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.616 [2024-07-15 20:24:12.792195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.616 [2024-07-15 20:24:12.792203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.616 [2024-07-15 20:24:12.792210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.616 [2024-07-15 20:24:12.795698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.616 [2024-07-15 20:24:12.804966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.616 [2024-07-15 20:24:12.805627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.616 [2024-07-15 20:24:12.805642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.616 [2024-07-15 20:24:12.805649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.616 [2024-07-15 20:24:12.805864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.616 [2024-07-15 20:24:12.806080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.616 [2024-07-15 20:24:12.806087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.616 [2024-07-15 20:24:12.806094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.616 [2024-07-15 20:24:12.809589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.616 [2024-07-15 20:24:12.818856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.616 [2024-07-15 20:24:12.819398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.616 [2024-07-15 20:24:12.819413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.616 [2024-07-15 20:24:12.819421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.616 [2024-07-15 20:24:12.819636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.819852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.819859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.819866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.823361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.617 [2024-07-15 20:24:12.832630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.833262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.833277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.833284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.833499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.833715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.833723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.833730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.837224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.617 [2024-07-15 20:24:12.846492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.847115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.847135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.847142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.847358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.847577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.847585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.847592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.851081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.617 [2024-07-15 20:24:12.860358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.860986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.861001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.861008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.861228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.861444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.861452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.861459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.864950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.617 [2024-07-15 20:24:12.874235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.874855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.874870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.874878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.875093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.875314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.875323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.875329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.878817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.617 [2024-07-15 20:24:12.888086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.888709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.888724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.888731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.888947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.889167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.889177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.889184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.892680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.617 [2024-07-15 20:24:12.901963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.902624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.902640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.902647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.902863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.903079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.903087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.903093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.906594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.617 [2024-07-15 20:24:12.915876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.916461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.916476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.916483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.916699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.916914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.916922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.916928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.920427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.617 [2024-07-15 20:24:12.929710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.930316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.930331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.930338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.930554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.930769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.930777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.930784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.934284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.617 [2024-07-15 20:24:12.943567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.944207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.944222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.944233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.944449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.944664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.944672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.944678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.948173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.617 [2024-07-15 20:24:12.957460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.958120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.958141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.958148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.958364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.617 [2024-07-15 20:24:12.958580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.617 [2024-07-15 20:24:12.958593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.617 [2024-07-15 20:24:12.958600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.617 [2024-07-15 20:24:12.962093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.617 [2024-07-15 20:24:12.971370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.617 [2024-07-15 20:24:12.972006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.617 [2024-07-15 20:24:12.972020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.617 [2024-07-15 20:24:12.972027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.617 [2024-07-15 20:24:12.972247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.618 [2024-07-15 20:24:12.972463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.618 [2024-07-15 20:24:12.972471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.618 [2024-07-15 20:24:12.972478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.618 [2024-07-15 20:24:12.975970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.618 [2024-07-15 20:24:12.985264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.618 [2024-07-15 20:24:12.985880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.618 [2024-07-15 20:24:12.985895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.618 [2024-07-15 20:24:12.985902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.618 [2024-07-15 20:24:12.986117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.618 [2024-07-15 20:24:12.986338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.618 [2024-07-15 20:24:12.986349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.618 [2024-07-15 20:24:12.986356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.618 [2024-07-15 20:24:12.989850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.618 [2024-07-15 20:24:12.999134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.618 [2024-07-15 20:24:12.999766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.618 [2024-07-15 20:24:12.999780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.618 [2024-07-15 20:24:12.999787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.618 [2024-07-15 20:24:13.000002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.618 [2024-07-15 20:24:13.000224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.618 [2024-07-15 20:24:13.000233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.618 [2024-07-15 20:24:13.000240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.618 [2024-07-15 20:24:13.003733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.618 [2024-07-15 20:24:13.013018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.618 [2024-07-15 20:24:13.013608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.618 [2024-07-15 20:24:13.013624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.618 [2024-07-15 20:24:13.013631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.618 [2024-07-15 20:24:13.013846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.618 [2024-07-15 20:24:13.014062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.618 [2024-07-15 20:24:13.014070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.618 [2024-07-15 20:24:13.014077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.618 [2024-07-15 20:24:13.017578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.618 [2024-07-15 20:24:13.026857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.618 [2024-07-15 20:24:13.027483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.618 [2024-07-15 20:24:13.027498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.618 [2024-07-15 20:24:13.027505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.618 [2024-07-15 20:24:13.027720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.618 [2024-07-15 20:24:13.027937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.618 [2024-07-15 20:24:13.027945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.618 [2024-07-15 20:24:13.027952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.618 [2024-07-15 20:24:13.031452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.618 [2024-07-15 20:24:13.040747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.618 [2024-07-15 20:24:13.041250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.618 [2024-07-15 20:24:13.041268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.618 [2024-07-15 20:24:13.041276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.618 [2024-07-15 20:24:13.041493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.618 [2024-07-15 20:24:13.041708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.618 [2024-07-15 20:24:13.041717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.618 [2024-07-15 20:24:13.041724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.618 [2024-07-15 20:24:13.045224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.881 [2024-07-15 20:24:13.054514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.055178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.881 [2024-07-15 20:24:13.055194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.881 [2024-07-15 20:24:13.055201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.881 [2024-07-15 20:24:13.055417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.881 [2024-07-15 20:24:13.055633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.881 [2024-07-15 20:24:13.055641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.881 [2024-07-15 20:24:13.055647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.881 [2024-07-15 20:24:13.059150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.881 [2024-07-15 20:24:13.068433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.069050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.881 [2024-07-15 20:24:13.069065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.881 [2024-07-15 20:24:13.069072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.881 [2024-07-15 20:24:13.069377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.881 [2024-07-15 20:24:13.069595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.881 [2024-07-15 20:24:13.069602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.881 [2024-07-15 20:24:13.069609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.881 [2024-07-15 20:24:13.073107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.881 [2024-07-15 20:24:13.082184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.082801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.881 [2024-07-15 20:24:13.082815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.881 [2024-07-15 20:24:13.082822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.881 [2024-07-15 20:24:13.083042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.881 [2024-07-15 20:24:13.083263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.881 [2024-07-15 20:24:13.083272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.881 [2024-07-15 20:24:13.083278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.881 [2024-07-15 20:24:13.086773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.881 [2024-07-15 20:24:13.096057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.096684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.881 [2024-07-15 20:24:13.096699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.881 [2024-07-15 20:24:13.096706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.881 [2024-07-15 20:24:13.096922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.881 [2024-07-15 20:24:13.097143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.881 [2024-07-15 20:24:13.097151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.881 [2024-07-15 20:24:13.097158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.881 [2024-07-15 20:24:13.100653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.881 [2024-07-15 20:24:13.109935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.110583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.881 [2024-07-15 20:24:13.110598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.881 [2024-07-15 20:24:13.110605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.881 [2024-07-15 20:24:13.110820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.881 [2024-07-15 20:24:13.111037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.881 [2024-07-15 20:24:13.111045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.881 [2024-07-15 20:24:13.111051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.881 [2024-07-15 20:24:13.114552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.881 [2024-07-15 20:24:13.123834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.124466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.881 [2024-07-15 20:24:13.124482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.881 [2024-07-15 20:24:13.124489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.881 [2024-07-15 20:24:13.124704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.881 [2024-07-15 20:24:13.124920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.881 [2024-07-15 20:24:13.124929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.881 [2024-07-15 20:24:13.124939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.881 [2024-07-15 20:24:13.128468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.881 [2024-07-15 20:24:13.137748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.881 [2024-07-15 20:24:13.138370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.138386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.138393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.138609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.138824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.138832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.138839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.142340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.882 [2024-07-15 20:24:13.151628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.152282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.152297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.152304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.152520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.152736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.152744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.152751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.156246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.882 [2024-07-15 20:24:13.165526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.166141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.166156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.166163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.166379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.166594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.166603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.166610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.170103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.882 [2024-07-15 20:24:13.179390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.180008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.180023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.180030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.180250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.180466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.180474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.180480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.183976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.882 [2024-07-15 20:24:13.193267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.193774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.193788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.193795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.194010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.194231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.194239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.194246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.197739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.882 [2024-07-15 20:24:13.207020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.207642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.207657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.207664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.207880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.208095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.208103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.208109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.211607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.882 [2024-07-15 20:24:13.220886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.221507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.221522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.221529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.221745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.221964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.221972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.221978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.225478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.882 [2024-07-15 20:24:13.234763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.235375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.235391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.235399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.235615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.235830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.235838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.235845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.239345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.882 [2024-07-15 20:24:13.248625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.249157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.249172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.249179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.249394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.249610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.249618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.249624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.882 [2024-07-15 20:24:13.253131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.882 [2024-07-15 20:24:13.262414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.882 [2024-07-15 20:24:13.263038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.882 [2024-07-15 20:24:13.263052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.882 [2024-07-15 20:24:13.263060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.882 [2024-07-15 20:24:13.263281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.882 [2024-07-15 20:24:13.263498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.882 [2024-07-15 20:24:13.263505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.882 [2024-07-15 20:24:13.263512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.883 [2024-07-15 20:24:13.267011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.883 [2024-07-15 20:24:13.276296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.883 [2024-07-15 20:24:13.276959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.883 [2024-07-15 20:24:13.276973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.883 [2024-07-15 20:24:13.276980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.883 [2024-07-15 20:24:13.277201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.883 [2024-07-15 20:24:13.277417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.883 [2024-07-15 20:24:13.277424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.883 [2024-07-15 20:24:13.277431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.883 [2024-07-15 20:24:13.280928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.883 [2024-07-15 20:24:13.290211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.883 [2024-07-15 20:24:13.290830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.883 [2024-07-15 20:24:13.290845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.883 [2024-07-15 20:24:13.290852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.883 [2024-07-15 20:24:13.291067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.883 [2024-07-15 20:24:13.291288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.883 [2024-07-15 20:24:13.291299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.883 [2024-07-15 20:24:13.291306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.883 [2024-07-15 20:24:13.294797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.883 [2024-07-15 20:24:13.304064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.883 [2024-07-15 20:24:13.304602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.883 [2024-07-15 20:24:13.304617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:15.883 [2024-07-15 20:24:13.304624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:15.883 [2024-07-15 20:24:13.304839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:15.883 [2024-07-15 20:24:13.305055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.883 [2024-07-15 20:24:13.305062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.883 [2024-07-15 20:24:13.305069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.883 [2024-07-15 20:24:13.308566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.144 [2024-07-15 20:24:13.317842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.144 [2024-07-15 20:24:13.318461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.144 [2024-07-15 20:24:13.318476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.144 [2024-07-15 20:24:13.318487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.144 [2024-07-15 20:24:13.318703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.144 [2024-07-15 20:24:13.318919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.144 [2024-07-15 20:24:13.318926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.144 [2024-07-15 20:24:13.318933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.144 [2024-07-15 20:24:13.322428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.144 [2024-07-15 20:24:13.331700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.144 [2024-07-15 20:24:13.332322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.144 [2024-07-15 20:24:13.332336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.144 [2024-07-15 20:24:13.332344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.144 [2024-07-15 20:24:13.332559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.144 [2024-07-15 20:24:13.332775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.144 [2024-07-15 20:24:13.332782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.144 [2024-07-15 20:24:13.332789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.144 [2024-07-15 20:24:13.336285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.144 [2024-07-15 20:24:13.345548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.144 [2024-07-15 20:24:13.346331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.144 [2024-07-15 20:24:13.346368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.144 [2024-07-15 20:24:13.346378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.144 [2024-07-15 20:24:13.346615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.144 [2024-07-15 20:24:13.346834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.144 [2024-07-15 20:24:13.346843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.144 [2024-07-15 20:24:13.346850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.144 [2024-07-15 20:24:13.350356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.144 [2024-07-15 20:24:13.359433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.144 [2024-07-15 20:24:13.359772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.144 [2024-07-15 20:24:13.359795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.144 [2024-07-15 20:24:13.359804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.144 [2024-07-15 20:24:13.360024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.144 [2024-07-15 20:24:13.360249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.144 [2024-07-15 20:24:13.360262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.144 [2024-07-15 20:24:13.360269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.144 [2024-07-15 20:24:13.363763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.144 [2024-07-15 20:24:13.373231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.144 [2024-07-15 20:24:13.373918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.144 [2024-07-15 20:24:13.373955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.144 [2024-07-15 20:24:13.373965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.144 [2024-07-15 20:24:13.374210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.144 [2024-07-15 20:24:13.374431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.144 [2024-07-15 20:24:13.374440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.144 [2024-07-15 20:24:13.374447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.144 [2024-07-15 20:24:13.377944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.144 [2024-07-15 20:24:13.387010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.144 [2024-07-15 20:24:13.387723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.144 [2024-07-15 20:24:13.387759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.144 [2024-07-15 20:24:13.387769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.144 [2024-07-15 20:24:13.388005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.144 [2024-07-15 20:24:13.388233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.144 [2024-07-15 20:24:13.388243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.144 [2024-07-15 20:24:13.388250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.391750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.145 [2024-07-15 20:24:13.400822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.401476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.401512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.401523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.401759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.401979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.401987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.401995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.405500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.145 [2024-07-15 20:24:13.414569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.415295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.415332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.415342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.415578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.415798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.415806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.415813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.419323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.145 [2024-07-15 20:24:13.428388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.429018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.429036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.429044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.429266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.429483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.429491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.429497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.432988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.145 [2024-07-15 20:24:13.442254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.443003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.443039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.443049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.443295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.443516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.443524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.443531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.447027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.145 [2024-07-15 20:24:13.456047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.456773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.456809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.456820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.457060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.457290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.457299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.457307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.460804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.145 [2024-07-15 20:24:13.469863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.470589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.470625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.470636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.470872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.471092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.471100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.471108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.474611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.145 [2024-07-15 20:24:13.483674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.484365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.484384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.484392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.484608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.484824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.484832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.484838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.488337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.145 [2024-07-15 20:24:13.497603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.498265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.498281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.145 [2024-07-15 20:24:13.498288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.145 [2024-07-15 20:24:13.498504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.145 [2024-07-15 20:24:13.498719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.145 [2024-07-15 20:24:13.498726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.145 [2024-07-15 20:24:13.498738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.145 [2024-07-15 20:24:13.502233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.145 [2024-07-15 20:24:13.511495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.145 [2024-07-15 20:24:13.512182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.145 [2024-07-15 20:24:13.512218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.146 [2024-07-15 20:24:13.512230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.146 [2024-07-15 20:24:13.512469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.146 [2024-07-15 20:24:13.512689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.146 [2024-07-15 20:24:13.512698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.146 [2024-07-15 20:24:13.512705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.146 [2024-07-15 20:24:13.516213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.146 [2024-07-15 20:24:13.525276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.146 [2024-07-15 20:24:13.526032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.146 [2024-07-15 20:24:13.526067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.146 [2024-07-15 20:24:13.526078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.146 [2024-07-15 20:24:13.526322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.146 [2024-07-15 20:24:13.526543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.146 [2024-07-15 20:24:13.526551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.146 [2024-07-15 20:24:13.526559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.146 [2024-07-15 20:24:13.530054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.146 [2024-07-15 20:24:13.539137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.146 [2024-07-15 20:24:13.539857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.146 [2024-07-15 20:24:13.539893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.146 [2024-07-15 20:24:13.539904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.146 [2024-07-15 20:24:13.540149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.146 [2024-07-15 20:24:13.540370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.146 [2024-07-15 20:24:13.540378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.146 [2024-07-15 20:24:13.540385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.146 [2024-07-15 20:24:13.543883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.146 [2024-07-15 20:24:13.552958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.146 [2024-07-15 20:24:13.553523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.146 [2024-07-15 20:24:13.553541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.146 [2024-07-15 20:24:13.553549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.146 [2024-07-15 20:24:13.553766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.146 [2024-07-15 20:24:13.553982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.146 [2024-07-15 20:24:13.553989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.146 [2024-07-15 20:24:13.553996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.146 [2024-07-15 20:24:13.557494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.146 [2024-07-15 20:24:13.566762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.146 [2024-07-15 20:24:13.567512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.146 [2024-07-15 20:24:13.567549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.146 [2024-07-15 20:24:13.567559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.146 [2024-07-15 20:24:13.567795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.146 [2024-07-15 20:24:13.568015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.146 [2024-07-15 20:24:13.568023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.146 [2024-07-15 20:24:13.568031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.146 [2024-07-15 20:24:13.571541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.407 [2024-07-15 20:24:13.580638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.407 [2024-07-15 20:24:13.581156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-07-15 20:24:13.581192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.407 [2024-07-15 20:24:13.581202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.407 [2024-07-15 20:24:13.581438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.407 [2024-07-15 20:24:13.581658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.407 [2024-07-15 20:24:13.581666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.407 [2024-07-15 20:24:13.581674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.407 [2024-07-15 20:24:13.585186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.407 [2024-07-15 20:24:13.594461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.407 [2024-07-15 20:24:13.595216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-07-15 20:24:13.595253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.407 [2024-07-15 20:24:13.595265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.407 [2024-07-15 20:24:13.595507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.407 [2024-07-15 20:24:13.595727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.407 [2024-07-15 20:24:13.595736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.407 [2024-07-15 20:24:13.595743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.407 [2024-07-15 20:24:13.599251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.407 [2024-07-15 20:24:13.608324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.407 [2024-07-15 20:24:13.608997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-07-15 20:24:13.609033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.609044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.609289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.609511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.609519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.609526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.613019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.408 [2024-07-15 20:24:13.622084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.622844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.622881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.622891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.623136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.623358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.623366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.623373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.626878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.408 [2024-07-15 20:24:13.635954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.636719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.636755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.636766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.637002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.637229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.637238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.637246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.640751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.408 [2024-07-15 20:24:13.649823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.650443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.650462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.650470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.650687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.650902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.650910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.650917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.654423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.408 [2024-07-15 20:24:13.663701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.664324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.664340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.664348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.664564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.664780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.664787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.664794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.668291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.408 [2024-07-15 20:24:13.677560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.678090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.678105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.678112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.678332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.678548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.678556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.678563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.682056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.408 [2024-07-15 20:24:13.691320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.692025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.692065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.692077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.692324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.692545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.692554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.692561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.696056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.408 [2024-07-15 20:24:13.705120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.705852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.705888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.705898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.706144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.706365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.706374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.706381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.709878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.408 [2024-07-15 20:24:13.718942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.719663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.719699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.719710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.719946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.720175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.720185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.720192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.723694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.408 [2024-07-15 20:24:13.732758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.733511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.733548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.733558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.733794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.734019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.734028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.734035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.408 [2024-07-15 20:24:13.737541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.408 [2024-07-15 20:24:13.746616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.408 [2024-07-15 20:24:13.747391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-15 20:24:13.747427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.408 [2024-07-15 20:24:13.747438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.408 [2024-07-15 20:24:13.747674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.408 [2024-07-15 20:24:13.747894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.408 [2024-07-15 20:24:13.747903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.408 [2024-07-15 20:24:13.747910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.751412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.409 [2024-07-15 20:24:13.760488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.409 [2024-07-15 20:24:13.761265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-15 20:24:13.761301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.409 [2024-07-15 20:24:13.761311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.409 [2024-07-15 20:24:13.761547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.409 [2024-07-15 20:24:13.761768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.409 [2024-07-15 20:24:13.761776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.409 [2024-07-15 20:24:13.761783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.765289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.409 [2024-07-15 20:24:13.774356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.409 [2024-07-15 20:24:13.774999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-15 20:24:13.775036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.409 [2024-07-15 20:24:13.775047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.409 [2024-07-15 20:24:13.775293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.409 [2024-07-15 20:24:13.775514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.409 [2024-07-15 20:24:13.775522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.409 [2024-07-15 20:24:13.775530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.779024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.409 [2024-07-15 20:24:13.788098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.409 [2024-07-15 20:24:13.788871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-15 20:24:13.788908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.409 [2024-07-15 20:24:13.788918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.409 [2024-07-15 20:24:13.789165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.409 [2024-07-15 20:24:13.789386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.409 [2024-07-15 20:24:13.789395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.409 [2024-07-15 20:24:13.789402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.792903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.409 [2024-07-15 20:24:13.801987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.409 [2024-07-15 20:24:13.802688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-15 20:24:13.802725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.409 [2024-07-15 20:24:13.802735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.409 [2024-07-15 20:24:13.802971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.409 [2024-07-15 20:24:13.803199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.409 [2024-07-15 20:24:13.803208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.409 [2024-07-15 20:24:13.803215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.806713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.409 [2024-07-15 20:24:13.815777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.409 [2024-07-15 20:24:13.816421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-15 20:24:13.816440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.409 [2024-07-15 20:24:13.816448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.409 [2024-07-15 20:24:13.816664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.409 [2024-07-15 20:24:13.816880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.409 [2024-07-15 20:24:13.816888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.409 [2024-07-15 20:24:13.816895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.820388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.409 [2024-07-15 20:24:13.829655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.409 [2024-07-15 20:24:13.830396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-15 20:24:13.830432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.409 [2024-07-15 20:24:13.830447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.409 [2024-07-15 20:24:13.830683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.409 [2024-07-15 20:24:13.830903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.409 [2024-07-15 20:24:13.830911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.409 [2024-07-15 20:24:13.830919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.409 [2024-07-15 20:24:13.834423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.671 [2024-07-15 20:24:13.843501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.844200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.844237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.844248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.844484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.844703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.844712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.844719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.848229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.671 [2024-07-15 20:24:13.857316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.858069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.858105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.858118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.858365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.858586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.858594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.858602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.862103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.671 [2024-07-15 20:24:13.871187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.871813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.871831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.871839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.872056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.872279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.872288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.872303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.875798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.671 [2024-07-15 20:24:13.885078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.885705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.885721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.885728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.885944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.886166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.886174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.886181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.889673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.671 [2024-07-15 20:24:13.898951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.899594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.899610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.899617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.899832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.900048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.900056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.900063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.903562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.671 [2024-07-15 20:24:13.912841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.913474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.913489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.913497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.913712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.913928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.913935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.913942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.917445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.671 [2024-07-15 20:24:13.926727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.927234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.927249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.927256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.671 [2024-07-15 20:24:13.927472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.671 [2024-07-15 20:24:13.927687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.671 [2024-07-15 20:24:13.927695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.671 [2024-07-15 20:24:13.927702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.671 [2024-07-15 20:24:13.931202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.671 [2024-07-15 20:24:13.940483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.671 [2024-07-15 20:24:13.941145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.671 [2024-07-15 20:24:13.941161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.671 [2024-07-15 20:24:13.941169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:13.941384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:13.941600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:13.941608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:13.941614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:13.945108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.672 [2024-07-15 20:24:13.954397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:13.955049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:13.955063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:13.955071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:13.955291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:13.955507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:13.955515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:13.955522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:13.959015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.672 [2024-07-15 20:24:13.968295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:13.968910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:13.968925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:13.968932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:13.969156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:13.969373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:13.969381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:13.969388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:13.972880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.672 [2024-07-15 20:24:13.982163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:13.982780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:13.982794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:13.982801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:13.983017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:13.983238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:13.983246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:13.983253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:13.986746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1169660 Killed "${NVMF_APP[@]}" "$@" 00:29:16.672 20:24:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:16.672 20:24:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:16.672 20:24:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:16.672 20:24:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:16.672 20:24:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.672 [2024-07-15 20:24:13.996022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:13.996607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:13.996643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:13.996654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:13.996890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:13.997110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:13.997119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:13.997134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:14.000631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1171804 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1171804 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1171804 ']' 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:16.672 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.672 [2024-07-15 20:24:14.009913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:14.010577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:14.010596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:14.010605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:14.010822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:14.011038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:14.011046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:14.011054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:14.014551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.672 [2024-07-15 20:24:14.023813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:14.024431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:14.024447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:14.024454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:14.024670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:14.024886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:14.024894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:14.024901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:14.028398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.672 [2024-07-15 20:24:14.037864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:14.038527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:14.038544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:14.038551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:14.038768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:14.038984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:14.038992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:14.038999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:14.042503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.672 [2024-07-15 20:24:14.051779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:14.052490] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:29:16.672 [2024-07-15 20:24:14.052519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:14.052535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.672 [2024-07-15 20:24:14.052555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:14.052566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:14.052802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:14.053022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:14.053030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:14.053038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:14.056553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.672 [2024-07-15 20:24:14.065623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:14.066274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:14.066293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:14.066301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:14.066518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:14.066734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:14.066743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:14.066750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 [2024-07-15 20:24:14.070240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.672 [2024-07-15 20:24:14.079510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.672 [2024-07-15 20:24:14.080167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.672 [2024-07-15 20:24:14.080190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.672 [2024-07-15 20:24:14.080198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.672 [2024-07-15 20:24:14.080419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.672 [2024-07-15 20:24:14.080636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.672 [2024-07-15 20:24:14.080644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.672 [2024-07-15 20:24:14.080651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.672 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.673 [2024-07-15 20:24:14.084151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.673 [2024-07-15 20:24:14.093423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.673 [2024-07-15 20:24:14.094177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.673 [2024-07-15 20:24:14.094214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.673 [2024-07-15 20:24:14.094226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.673 [2024-07-15 20:24:14.094465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.673 [2024-07-15 20:24:14.094685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.673 [2024-07-15 20:24:14.094694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.673 [2024-07-15 20:24:14.094702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.673 [2024-07-15 20:24:14.098207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.935 [2024-07-15 20:24:14.107365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.108052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.108071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.108079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.108301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.108518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.108526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.108533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.112023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.935 [2024-07-15 20:24:14.121296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.122045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.122082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.122093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.122336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.122557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.122566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.122573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.126072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.935 [2024-07-15 20:24:14.132494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:16.935 [2024-07-15 20:24:14.135147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.135793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.135812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.135824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.136042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.136264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.136272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.136279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.139771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.935 [2024-07-15 20:24:14.149050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.149685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.149701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.149709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.149926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.150146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.150155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.150162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.153668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.935 [2024-07-15 20:24:14.162940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.163612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.163628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.163636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.163852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.164068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.164076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.164083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.167580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.935 [2024-07-15 20:24:14.176848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.177354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.177371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.177378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.177594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.177811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.177823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.177830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.181327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.935 [2024-07-15 20:24:14.185747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.935 [2024-07-15 20:24:14.185773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.935 [2024-07-15 20:24:14.185779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.935 [2024-07-15 20:24:14.185784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.935 [2024-07-15 20:24:14.185788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.935 [2024-07-15 20:24:14.185908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.935 [2024-07-15 20:24:14.186067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.935 [2024-07-15 20:24:14.186069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.935 [2024-07-15 20:24:14.190599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.191368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.191407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.191418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.191661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.191881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.191890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.191898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:16.935 [2024-07-15 20:24:14.195405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.935 [2024-07-15 20:24:14.204476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.205172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.205198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.205207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.935 [2024-07-15 20:24:14.205434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.935 [2024-07-15 20:24:14.205652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.935 [2024-07-15 20:24:14.205659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.935 [2024-07-15 20:24:14.205666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.935 [2024-07-15 20:24:14.209168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.935 [2024-07-15 20:24:14.218235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.935 [2024-07-15 20:24:14.218965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.935 [2024-07-15 20:24:14.219002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.935 [2024-07-15 20:24:14.219018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.219267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.219488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.219496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.219503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.222999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.936 [2024-07-15 20:24:14.232071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.232795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.232832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.232843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.233082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.233310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.233319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.233326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.236825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.936 [2024-07-15 20:24:14.245894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.246547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.246583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.246593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.246830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.247050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.247059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.247067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.250569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.936 [2024-07-15 20:24:14.259654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.260402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.260438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.260449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.260685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.260905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.260918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.260926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.264434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.936 [2024-07-15 20:24:14.273508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.273979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.273997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.274005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.274228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.274445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.274452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.274459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.277950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.936 [2024-07-15 20:24:14.287429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.288018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.288054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.288065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.288314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.288537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.288545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.288553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.292048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.936 [2024-07-15 20:24:14.301327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.302107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.302150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.302162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.302401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.302621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.302630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.302637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.306140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.936 [2024-07-15 20:24:14.315215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.315872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.315890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.315898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.316114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.316337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.316346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.316353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.319843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.936 [2024-07-15 20:24:14.329111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.329666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.329681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.329688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.329904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.330119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.330133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.330140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.333634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.936 [2024-07-15 20:24:14.342907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.343548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.343564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.343574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.343790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.344005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.344013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.344020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.347515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.936 [2024-07-15 20:24:14.356793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.936 [2024-07-15 20:24:14.357311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.936 [2024-07-15 20:24:14.357326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:16.936 [2024-07-15 20:24:14.357334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:16.936 [2024-07-15 20:24:14.357553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:16.936 [2024-07-15 20:24:14.357769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.936 [2024-07-15 20:24:14.357777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.936 [2024-07-15 20:24:14.357783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.936 [2024-07-15 20:24:14.361281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.199 [2024-07-15 20:24:14.370553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.199 [2024-07-15 20:24:14.371218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.199 [2024-07-15 20:24:14.371234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.199 [2024-07-15 20:24:14.371242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.199 [2024-07-15 20:24:14.371457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.199 [2024-07-15 20:24:14.371673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.199 [2024-07-15 20:24:14.371682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.199 [2024-07-15 20:24:14.371688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.199 [2024-07-15 20:24:14.375186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.199 [2024-07-15 20:24:14.384460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.199 [2024-07-15 20:24:14.385090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.199 [2024-07-15 20:24:14.385105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.199 [2024-07-15 20:24:14.385114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.199 [2024-07-15 20:24:14.385335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.199 [2024-07-15 20:24:14.385552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.199 [2024-07-15 20:24:14.385560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.199 [2024-07-15 20:24:14.385567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.199 [2024-07-15 20:24:14.389056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.199 [2024-07-15 20:24:14.398354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.199 [2024-07-15 20:24:14.398800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.199 [2024-07-15 20:24:14.398814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.199 [2024-07-15 20:24:14.398821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.199 [2024-07-15 20:24:14.399037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.199 [2024-07-15 20:24:14.399258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.199 [2024-07-15 20:24:14.399267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.199 [2024-07-15 20:24:14.399278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.199 [2024-07-15 20:24:14.402792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.199 [2024-07-15 20:24:14.412270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.199 [2024-07-15 20:24:14.412897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.199 [2024-07-15 20:24:14.412911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.199 [2024-07-15 20:24:14.412918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.199 [2024-07-15 20:24:14.413139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.199 [2024-07-15 20:24:14.413355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.199 [2024-07-15 20:24:14.413363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.199 [2024-07-15 20:24:14.413369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.199 [2024-07-15 20:24:14.416863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.199 [2024-07-15 20:24:14.426141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.199 [2024-07-15 20:24:14.426775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.199 [2024-07-15 20:24:14.426790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.199 [2024-07-15 20:24:14.426797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.199 [2024-07-15 20:24:14.427012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.199 [2024-07-15 20:24:14.427234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.199 [2024-07-15 20:24:14.427243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.199 [2024-07-15 20:24:14.427249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.199 [2024-07-15 20:24:14.430739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.199 [2024-07-15 20:24:14.440012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.199 [2024-07-15 20:24:14.440651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.199 [2024-07-15 20:24:14.440665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.440672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.440888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.441103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.441111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.441117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.444613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.200 [2024-07-15 20:24:14.453890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.454495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.454510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.454517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.454733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.454949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.454957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.454964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.458461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.200 [2024-07-15 20:24:14.467732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.468366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.468381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.468388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.468603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.468819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.468827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.468833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.472463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.200 [2024-07-15 20:24:14.481533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.482168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.482185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.482192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.482407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.482622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.482631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.482638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.486128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.200 [2024-07-15 20:24:14.495395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.495957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.495971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.495978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.496198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.496418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.496426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.496433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.499923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.200 [2024-07-15 20:24:14.509192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.509819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.509833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.509841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.510056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.510277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.510285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.510292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.513785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.200 [2024-07-15 20:24:14.523056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.523513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.523528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.523535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.523751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.523966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.523975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.523982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.527475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.200 [2024-07-15 20:24:14.536950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.537536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.537551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.537558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.537773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.537989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.537996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.538003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.541500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.200 [2024-07-15 20:24:14.550774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.551374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.551411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.551421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.551657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.551877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.551886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.551893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.555412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.200 [2024-07-15 20:24:14.564689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.565306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.565326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.200 [2024-07-15 20:24:14.565333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.200 [2024-07-15 20:24:14.565550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.200 [2024-07-15 20:24:14.565767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.200 [2024-07-15 20:24:14.565774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.200 [2024-07-15 20:24:14.565781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.200 [2024-07-15 20:24:14.569280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.200 [2024-07-15 20:24:14.578554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.200 [2024-07-15 20:24:14.579186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.200 [2024-07-15 20:24:14.579201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.201 [2024-07-15 20:24:14.579209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.201 [2024-07-15 20:24:14.579425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.201 [2024-07-15 20:24:14.579640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.201 [2024-07-15 20:24:14.579648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.201 [2024-07-15 20:24:14.579655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.201 [2024-07-15 20:24:14.583152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.201 [2024-07-15 20:24:14.592419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.201 [2024-07-15 20:24:14.593051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.201 [2024-07-15 20:24:14.593066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.201 [2024-07-15 20:24:14.593078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.201 [2024-07-15 20:24:14.593298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.201 [2024-07-15 20:24:14.593514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.201 [2024-07-15 20:24:14.593522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.201 [2024-07-15 20:24:14.593529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.201 [2024-07-15 20:24:14.597021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.201 [2024-07-15 20:24:14.606301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.201 [2024-07-15 20:24:14.606963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.201 [2024-07-15 20:24:14.606977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.201 [2024-07-15 20:24:14.606984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.201 [2024-07-15 20:24:14.607204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.201 [2024-07-15 20:24:14.607420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.201 [2024-07-15 20:24:14.607427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.201 [2024-07-15 20:24:14.607434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.201 [2024-07-15 20:24:14.610922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.201 [2024-07-15 20:24:14.620221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.201 [2024-07-15 20:24:14.620896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.201 [2024-07-15 20:24:14.620911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.201 [2024-07-15 20:24:14.620918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.201 [2024-07-15 20:24:14.621139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.201 [2024-07-15 20:24:14.621355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.201 [2024-07-15 20:24:14.621362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.201 [2024-07-15 20:24:14.621369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.201 [2024-07-15 20:24:14.624860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.463 [2024-07-15 20:24:14.634132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.463 [2024-07-15 20:24:14.634641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.463 [2024-07-15 20:24:14.634655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.463 [2024-07-15 20:24:14.634662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.463 [2024-07-15 20:24:14.634878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.463 [2024-07-15 20:24:14.635094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.463 [2024-07-15 20:24:14.635109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.463 [2024-07-15 20:24:14.635115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.463 [2024-07-15 20:24:14.638613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.463 [2024-07-15 20:24:14.647886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.463 [2024-07-15 20:24:14.648521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.463 [2024-07-15 20:24:14.648536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.463 [2024-07-15 20:24:14.648543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.463 [2024-07-15 20:24:14.648759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.463 [2024-07-15 20:24:14.648975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.648982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.648989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.652487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.464 [2024-07-15 20:24:14.661771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.662409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.662425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.662432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.662648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.662863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.662872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.662881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.666378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.464 [2024-07-15 20:24:14.675647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.676309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.676324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.676331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.676546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.676762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.676771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.676778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.680275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.464 [2024-07-15 20:24:14.689550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.690205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.690219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.690226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.690443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.690658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.690666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.690673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.694169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.464 [2024-07-15 20:24:14.703440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.703947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.703964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.703972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.704193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.704410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.704417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.704424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.707917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.464 [2024-07-15 20:24:14.717193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.717857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.717871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.717878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.718094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.718313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.718321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.718328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.721818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
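The same refuse-and-fail cycle repeats roughly every 14 ms (attempt timestamps 14.287, 14.301, 14.315, ...) while the target side is still being configured. When skimming a capture like this, a pair of grep counts gives a quick ratio of failed to completed resets; the file name below is a placeholder for wherever the console output was saved:

```bash
# Sketch, assuming the console output was saved to console.log (hypothetical name).
grep -c 'Resetting controller failed.'     console.log   # reset attempts that gave up
grep -c 'Resetting controller successful.' console.log   # resets that eventually completed
```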
00:29:17.464 [2024-07-15 20:24:14.731087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.731707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.731721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.731728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.731949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.732169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.732176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.464 [2024-07-15 20:24:14.732183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.464 [2024-07-15 20:24:14.735674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.464 [2024-07-15 20:24:14.744942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.464 [2024-07-15 20:24:14.745357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.464 [2024-07-15 20:24:14.745372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.464 [2024-07-15 20:24:14.745379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.464 [2024-07-15 20:24:14.745595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.464 [2024-07-15 20:24:14.745810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.464 [2024-07-15 20:24:14.745818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.745825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 [2024-07-15 20:24:14.749324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.465 [2024-07-15 20:24:14.758811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 [2024-07-15 20:24:14.759418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 [2024-07-15 20:24:14.759454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.759465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.759702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 [2024-07-15 20:24:14.759922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.465 [2024-07-15 20:24:14.759930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.759938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 [2024-07-15 20:24:14.763443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.465 [2024-07-15 20:24:14.772723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 [2024-07-15 20:24:14.773371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 [2024-07-15 20:24:14.773389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.773397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.773615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 [2024-07-15 20:24:14.773831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.465 [2024-07-15 20:24:14.773839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.773850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 [2024-07-15 20:24:14.777350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.465 [2024-07-15 20:24:14.786625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 [2024-07-15 20:24:14.787084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 [2024-07-15 20:24:14.787099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.787107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.787328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 [2024-07-15 20:24:14.787544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.465 [2024-07-15 20:24:14.787552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.787560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 [2024-07-15 20:24:14.791051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.465 [2024-07-15 20:24:14.800533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 [2024-07-15 20:24:14.801196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 [2024-07-15 20:24:14.801212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.801220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.801436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 [2024-07-15 20:24:14.801652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.465 [2024-07-15 20:24:14.801659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.801667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 [2024-07-15 20:24:14.805161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.465 [2024-07-15 20:24:14.814438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 [2024-07-15 20:24:14.815097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 [2024-07-15 20:24:14.815112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.815120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.815342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 [2024-07-15 20:24:14.815558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.465 [2024-07-15 20:24:14.815565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.815572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 [2024-07-15 20:24:14.819062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.465 [2024-07-15 20:24:14.828332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.465 [2024-07-15 20:24:14.828954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:17.465 [2024-07-15 20:24:14.828968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.828977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.829197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 20:24:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:17.465 [2024-07-15 20:24:14.829413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.465 [2024-07-15 20:24:14.829420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.465 [2024-07-15 20:24:14.829427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.465 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.465 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.465 [2024-07-15 20:24:14.832918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.465 [2024-07-15 20:24:14.842193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.465 [2024-07-15 20:24:14.842696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.465 [2024-07-15 20:24:14.842714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.465 [2024-07-15 20:24:14.842722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.465 [2024-07-15 20:24:14.842938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.465 [2024-07-15 20:24:14.843164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.466 [2024-07-15 20:24:14.843173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.466 [2024-07-15 20:24:14.843180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.466 [2024-07-15 20:24:14.846670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.466 [2024-07-15 20:24:14.855953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.466 [2024-07-15 20:24:14.856658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.466 [2024-07-15 20:24:14.856674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.466 [2024-07-15 20:24:14.856681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.466 [2024-07-15 20:24:14.856897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.466 [2024-07-15 20:24:14.857113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.466 [2024-07-15 20:24:14.857129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.466 [2024-07-15 20:24:14.857136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.466 [2024-07-15 20:24:14.860630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.466 [2024-07-15 20:24:14.869700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.466 [2024-07-15 20:24:14.870338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.466 [2024-07-15 20:24:14.870376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.466 [2024-07-15 20:24:14.870387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.466 [2024-07-15 20:24:14.870623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.466 [2024-07-15 20:24:14.870844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.466 [2024-07-15 20:24:14.870854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.466 [2024-07-15 20:24:14.870862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.466 [2024-07-15 20:24:14.874369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.466 [2024-07-15 20:24:14.876544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.466 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.466 [2024-07-15 20:24:14.883444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.466 [2024-07-15 20:24:14.884135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.466 [2024-07-15 20:24:14.884154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.466 [2024-07-15 20:24:14.884161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.466 [2024-07-15 20:24:14.884378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.466 [2024-07-15 20:24:14.884594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.466 [2024-07-15 20:24:14.884602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.466 [2024-07-15 20:24:14.884609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:17.466 [2024-07-15 20:24:14.888100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.728 [2024-07-15 20:24:14.897377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.728 [2024-07-15 20:24:14.898055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.728 [2024-07-15 20:24:14.898071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.728 [2024-07-15 20:24:14.898079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.728 [2024-07-15 20:24:14.898299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.728 [2024-07-15 20:24:14.898515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.728 [2024-07-15 20:24:14.898523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.728 [2024-07-15 20:24:14.898535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.728 [2024-07-15 20:24:14.902025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.728 [2024-07-15 20:24:14.911303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.728 [2024-07-15 20:24:14.911939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.728 [2024-07-15 20:24:14.911954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.728 [2024-07-15 20:24:14.911961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.728 [2024-07-15 20:24:14.912181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.728 [2024-07-15 20:24:14.912398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.728 [2024-07-15 20:24:14.912406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.728 [2024-07-15 20:24:14.912413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.728 Malloc0 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.728 [2024-07-15 20:24:14.915902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.728 [2024-07-15 20:24:14.925179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.728 [2024-07-15 20:24:14.925817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.728 [2024-07-15 20:24:14.925832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.728 [2024-07-15 20:24:14.925840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.728 [2024-07-15 20:24:14.926055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.728 [2024-07-15 20:24:14.926275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.728 [2024-07-15 20:24:14.926283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.728 [2024-07-15 20:24:14.926290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.728 [2024-07-15 20:24:14.929779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.728 [2024-07-15 20:24:14.939048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.728 [2024-07-15 20:24:14.939726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.728 [2024-07-15 20:24:14.939741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcee3b0 with addr=10.0.0.2, port=4420 00:29:17.728 [2024-07-15 20:24:14.939752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee3b0 is same with the state(5) to be set 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.728 [2024-07-15 20:24:14.939967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee3b0 (9): Bad file descriptor 00:29:17.728 [2024-07-15 20:24:14.940188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.728 [2024-07-15 20:24:14.940197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.728 [2024-07-15 20:24:14.940204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
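The target-side configuration for this bdevperf run is spread across the rpc_cmd traces above (host/bdevperf.sh@17 through @21), interleaved with reconnect errors from the I/O phase. Collected in one place, and assuming a running nvmf_tgt on the default /var/tmp/spdk.sock plus the scripts/rpc.py client from the SPDK tree (rpc_cmd in the test harness is a wrapper around it; the rpc variable below is just shorthand for this sketch), the same bring-up is roughly:

  rpc=scripts/rpc.py
  # Flags copied verbatim from the rpc_cmd traces above.
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, io-unit-size 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The NOTICE lines "*** TCP Transport Init ***" and "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" in the surrounding trace are the target acknowledging the first and last of these calls; after that the script only waits on the bdevperf pid.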
00:29:17.728 [2024-07-15 20:24:14.943697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.728 [2024-07-15 20:24:14.946008] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.728 20:24:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.729 20:24:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1170320 00:29:17.729 [2024-07-15 20:24:14.952973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.729 [2024-07-15 20:24:14.983522] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:27.795 00:29:27.795 Latency(us) 00:29:27.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:27.795 Verification LBA range: start 0x0 length 0x4000 00:29:27.795 Nvme1n1 : 15.00 8672.84 33.88 9860.35 0.00 6880.92 1044.48 19551.57 00:29:27.795 =================================================================================================================== 00:29:27.795 Total : 8672.84 33.88 9860.35 0.00 6880.92 1044.48 19551.57 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.795 rmmod nvme_tcp 00:29:27.795 rmmod nvme_fabrics 00:29:27.795 rmmod nvme_keyring 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1171804 ']' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1171804 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1171804 ']' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1171804 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1171804 00:29:27.795 20:24:23 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1171804' 00:29:27.795 killing process with pid 1171804 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1171804 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1171804 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.795 20:24:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.739 20:24:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:28.739 00:29:28.739 real 0m27.527s 00:29:28.739 user 1m2.928s 00:29:28.739 sys 0m6.901s 00:29:28.739 20:24:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:28.739 20:24:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.739 ************************************ 00:29:28.739 END TEST nvmf_bdevperf 00:29:28.739 ************************************ 00:29:28.739 20:24:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:28.739 20:24:26 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:28.739 20:24:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:28.739 20:24:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.739 20:24:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.739 ************************************ 00:29:28.739 START TEST nvmf_target_disconnect 00:29:28.739 ************************************ 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:28.739 * Looking for test storage... 
00:29:28.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.739 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:29.001 20:24:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:35.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:35.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.590 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.591 20:24:32 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:35.591 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:35.591 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:35.591 20:24:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.591 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.851 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:36.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:29:36.111 00:29:36.111 --- 10.0.0.2 ping statistics --- 00:29:36.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.111 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:29:36.111 00:29:36.111 --- 10.0.0.1 ping statistics --- 00:29:36.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.111 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.111 ************************************ 00:29:36.111 START TEST nvmf_target_disconnect_tc1 00:29:36.111 ************************************ 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:36.111 
20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.111 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.112 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.112 [2024-07-15 20:24:33.471883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.112 [2024-07-15 20:24:33.471937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0be20 with addr=10.0.0.2, port=4420 00:29:36.112 [2024-07-15 20:24:33.471964] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:36.112 [2024-07-15 20:24:33.471978] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.112 [2024-07-15 20:24:33.471985] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:36.112 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:36.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:36.112 Initializing NVMe Controllers 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:36.112 00:29:36.112 real 0m0.112s 00:29:36.112 user 0m0.056s 00:29:36.112 sys 
0m0.055s 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.112 ************************************ 00:29:36.112 END TEST nvmf_target_disconnect_tc1 00:29:36.112 ************************************ 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.112 20:24:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.371 ************************************ 00:29:36.371 START TEST nvmf_target_disconnect_tc2 00:29:36.371 ************************************ 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1177857 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1177857 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1177857 ']' 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
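For orientation, disconnect_init in target_disconnect_tc2 boils down to: start a second nvmf_tgt pinned to cores 4-7 (-m 0xF0) inside the cvl_0_0_ns_spdk namespace set up earlier (10.0.0.2 sits on cvl_0_0 in that namespace), build the same cnode1 subsystem, point the reconnect example at 10.0.0.2:4420, then kill the target so the host is left retrying. A condensed sketch of that flow, with paths relative to the SPDK tree and the waitforlisten polling and NOT/error-handling wrappers of the real script omitted:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # ... wait for /var/tmp/spdk.sock to appear (waitforlisten in the real script) ...

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Host side: 10 seconds of 4k randrw I/O with automatic reconnects.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!

  sleep 2
  kill -9 "$nvmfpid"   # yank the target away mid-run
  sleep 2

The burst of "Read/Write completed with error" lines and the endless "connect() failed, errno = 111" retries that follow in the trace are this kill taking effect.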
00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.371 20:24:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.371 [2024-07-15 20:24:33.625769] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:29:36.371 [2024-07-15 20:24:33.625835] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.371 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.371 [2024-07-15 20:24:33.712846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.630 [2024-07-15 20:24:33.806409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.630 [2024-07-15 20:24:33.806466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.630 [2024-07-15 20:24:33.806474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.630 [2024-07-15 20:24:33.806481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.630 [2024-07-15 20:24:33.806492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.630 [2024-07-15 20:24:33.807101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:36.630 [2024-07-15 20:24:33.807234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:36.630 [2024-07-15 20:24:33.807598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:36.630 [2024-07-15 20:24:33.807601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 Malloc0 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:37.201 20:24:34 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 [2024-07-15 20:24:34.489238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 [2024-07-15 20:24:34.517588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1178049 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:37.201 20:24:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.201 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:39.116 20:24:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1177857 00:29:39.116 20:24:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Read completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 Write completed with error (sct=0, sc=8) 00:29:39.116 starting I/O failed 00:29:39.116 [2024-07-15 20:24:36.546399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.116 [2024-07-15 20:24:36.546884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.116 [2024-07-15 20:24:36.546901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.116 qpair failed and we were unable to 
recover it. 00:29:39.414 [2024-07-15 20:24:36.547369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.547407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.547859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.547877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.548404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.548442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.548846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.548858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.549379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.549417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.549762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.549774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.550355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.550393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.550706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.550718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.551038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.551048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.551449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.551461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 
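errno 111 in the connect() failures above and below is ECONNREFUSED on Linux: with the target just killed via kill -9, nothing is listening on 10.0.0.2:4420, so every retry from the reconnect example is refused until the test gives up or restarts the target. One way to confirm the mapping (any Python 3):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused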
00:29:39.414 [2024-07-15 20:24:36.551899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.551909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.552231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.552241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.552682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.552692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.553167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.553178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.553395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.553411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.553825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.553836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.554270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.554281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.554684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.554694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.555119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.555137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.555457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.555467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 
00:29:39.414 [2024-07-15 20:24:36.555740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.555749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.556131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.556142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.556602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.556612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.556956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.556965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.557414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.557452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.557886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.557898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.414 qpair failed and we were unable to recover it. 00:29:39.414 [2024-07-15 20:24:36.558395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.414 [2024-07-15 20:24:36.558432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.558817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.558829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.559210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.559229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.559629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.559641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 
00:29:39.415 [2024-07-15 20:24:36.559980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.559990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.560395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.560406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.560837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.560848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.561388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.561424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.561856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.561868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.562335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.562371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.562669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.562683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.563107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.563117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.563499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.563509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.563918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.563928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 
00:29:39.415 [2024-07-15 20:24:36.564430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.564467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.564896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.564908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.565398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.565434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.565860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.565871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.566375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.566411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.566841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.566852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.567245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.567256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.567573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.567583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.567976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.567986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.568375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.568385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 
00:29:39.415 [2024-07-15 20:24:36.568863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.568872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.569379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.569416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.569873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.569885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.570406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.570442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.570917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.570928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.571350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.571387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.571822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.571834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.572260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.572271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.572662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.572673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.573003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.573013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 
00:29:39.415 [2024-07-15 20:24:36.573353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.573363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.573752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.573762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.574063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.574072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.574460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.574470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.574894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.574904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.575334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.575344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.575765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.575777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.415 qpair failed and we were unable to recover it. 00:29:39.415 [2024-07-15 20:24:36.576162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.415 [2024-07-15 20:24:36.576172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.576574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.576583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.576943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.576953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 
00:29:39.416 [2024-07-15 20:24:36.577332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.577344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.577671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.577681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.578059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.578070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.578410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.578420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.578817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.578827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.579246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.579257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.580382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.580405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.580793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.580804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.581181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.581193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.581587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.581596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 
00:29:39.416 [2024-07-15 20:24:36.582021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.582030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.582430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.582440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.582897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.582906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.583330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.583340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.583741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.583750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.584049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.584061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.584226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.584237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.584661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.584670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.584972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.584981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.585386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.585396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 
00:29:39.416 [2024-07-15 20:24:36.585762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.585772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.586191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.586200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.586592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.586601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.587004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.587014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.587500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.587510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.587895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.587904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.588294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.588306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.588672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.588682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.589016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.589026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.589504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.589514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 
00:29:39.416 [2024-07-15 20:24:36.589885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.589895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.590276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.590286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.590738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.590747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.591134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.591144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.591560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.591569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.591987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.591996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.592282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.592292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.592717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.592727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.416 [2024-07-15 20:24:36.593152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.416 [2024-07-15 20:24:36.593162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.416 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.593421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.593431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 
00:29:39.417 [2024-07-15 20:24:36.593871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.593880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.594262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.594272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.594700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.594710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.595084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.595093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.595595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.595605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.595998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.596007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.596321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.596330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.596719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.596729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.597150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.597160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.597570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.597579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 
00:29:39.417 [2024-07-15 20:24:36.597953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.597962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.598379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.598389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.598798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.598808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.599241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.599253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.599675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.599684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.599788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.599800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 
00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 [2024-07-15 20:24:36.600342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 
Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Read completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 Write completed with error (sct=0, sc=8) 00:29:39.417 starting I/O failed 00:29:39.417 [2024-07-15 20:24:36.600855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:39.417 [2024-07-15 20:24:36.601325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.601359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e88000b90 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.417 [2024-07-15 20:24:36.601831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.417 [2024-07-15 20:24:36.601843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.417 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.602138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.602150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 
00:29:39.418 [2024-07-15 20:24:36.602509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.602525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.602951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.602960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.603262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.603272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.603651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.603660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.604068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.604077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.604197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.604205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.604600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.604609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.605031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.605041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.605432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.605443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.605866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.605876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 
00:29:39.418 [2024-07-15 20:24:36.606271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.606280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.606710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.606720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.607186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.607196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.607592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.607602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.607991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.608000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.608417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.608426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.608804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.608813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.609199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.609210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.609608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.609617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.610008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.610017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 
00:29:39.418 [2024-07-15 20:24:36.610414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.610423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.610848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.610860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.611234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.611244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.611642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.611652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.612089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.612099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.612501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.612511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.612909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.612918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.613377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.613386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.613806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.613816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.614363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.614399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 
00:29:39.418 [2024-07-15 20:24:36.614833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.614844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.615211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.615221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.418 [2024-07-15 20:24:36.615649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.418 [2024-07-15 20:24:36.615658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.418 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.616099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.616108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.616567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.616577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.616984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.616993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.617410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.617446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.617817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.617830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.618335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.618372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.618788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.618800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 
00:29:39.419 [2024-07-15 20:24:36.619187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.619197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.619493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.619502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.619884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.619893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.620270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.620280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.620700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.620710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.621134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.621145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.621615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.621624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.621922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.621931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.622386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.622428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.622889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.622901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 
00:29:39.419 [2024-07-15 20:24:36.623403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.623439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.623864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.623876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.624358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.624393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.624829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.624842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.625263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.625274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.625686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.625695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.626078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.626087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.626428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.626438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.626840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.626850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.627224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.627233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 
00:29:39.419 [2024-07-15 20:24:36.627616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.627625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.627855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.627868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.628286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.628296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.628712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.628722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.629129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.629140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.629517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.629527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.629953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.629962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.630529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.630566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.630950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.630962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 00:29:39.419 [2024-07-15 20:24:36.631466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.631502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.419 qpair failed and we were unable to recover it. 
00:29:39.419 [2024-07-15 20:24:36.631949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.419 [2024-07-15 20:24:36.631960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.632451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.632488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.632917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.632929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.633355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.633391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.633873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.633885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.634397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.634438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.634851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.634863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.635381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.635417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.635855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.635866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.636387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.636424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 
00:29:39.420 [2024-07-15 20:24:36.636860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.636871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.637380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.637417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.637851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.637863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.638287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.638298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.638682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.638691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.639070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.639079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.639465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.639475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.639787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.639797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.640200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.640209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.640635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.640645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 
00:29:39.420 [2024-07-15 20:24:36.641082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.641091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.641487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.641498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.641876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.641885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.642303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.642313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.642739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.642748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.643169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.643179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.643647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.643657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.644054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.644063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.644498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.644507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.644881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.644890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 
00:29:39.420 [2024-07-15 20:24:36.645185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.645194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.645492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.645502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.645896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.645906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.646205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.646216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.646619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.646628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.647011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.647021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.647439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.647449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.647867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.647876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.648183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.648193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.648615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.648625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 
00:29:39.420 [2024-07-15 20:24:36.649010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.649019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.420 [2024-07-15 20:24:36.649439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.420 [2024-07-15 20:24:36.649449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.420 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.649830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.649839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.650219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.650229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.650653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.650662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.651039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.651048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.651507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.651516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.651934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.651943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.652320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.652330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.652545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.652559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 
00:29:39.421 [2024-07-15 20:24:36.652990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.653000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.653378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.653388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.653780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.653790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.654215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.654225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.654514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.654525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.654922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.654932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.655291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.655301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.655723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.655732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.656115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.656129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.656544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.656553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 
00:29:39.421 [2024-07-15 20:24:36.656845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.656856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.657340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.657351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.657741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.657750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.658133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.658142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.658547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.658556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.658973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.658982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.659495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.659532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.659879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.659890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.660414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.660451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.660890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.660903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 
00:29:39.421 [2024-07-15 20:24:36.661428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.661464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.661889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.661901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.662370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.662406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.662844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.662862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.663344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.663381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.663813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.663824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.664204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.664214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.664666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.664675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.665061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.665071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 00:29:39.421 [2024-07-15 20:24:36.665472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.421 [2024-07-15 20:24:36.665483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.421 qpair failed and we were unable to recover it. 
00:29:39.421 [2024-07-15 20:24:36.665914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.665924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.666343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.666379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.666814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.666826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.667118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.667136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.667524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.667534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.667913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.667922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.668394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.668431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.668873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.668885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.669363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.669399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.669833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.669845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 
00:29:39.422 [2024-07-15 20:24:36.670369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.670406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.670796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.670808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.671192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.671203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.671588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.671598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.671967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.671976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.672387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.672398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.672777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.672787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.673164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.673174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.673612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.673622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.674028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.674037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 
00:29:39.422 [2024-07-15 20:24:36.674461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.674476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.674924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.674933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.675192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.675203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.675622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.675631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.676043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.676052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.676463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.676473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.676880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.676890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.677309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.677318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.677712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.677721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.678131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.678142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 
00:29:39.422 [2024-07-15 20:24:36.678598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.678607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.678968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.678977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.679472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.679508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.679939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.679951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.680358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.680394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.680804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.680816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.681312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.681349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.681669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.681680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.681903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.681916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.682296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.682308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 
00:29:39.422 [2024-07-15 20:24:36.682624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.682634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.422 qpair failed and we were unable to recover it. 00:29:39.422 [2024-07-15 20:24:36.683025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.422 [2024-07-15 20:24:36.683034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.683434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.683443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.683821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.683831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.684231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.684241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.684644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.684653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.685028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.685037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.685448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.685458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.685882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.685892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.686295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.686305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 
00:29:39.423 [2024-07-15 20:24:36.686746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.686755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.687151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.687161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.687558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.687567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.687945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.687954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.688161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.688171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.688561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.688570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.688876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.688885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.689292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.689301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.689696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.689705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 00:29:39.423 [2024-07-15 20:24:36.690181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.423 [2024-07-15 20:24:36.690190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.423 qpair failed and we were unable to recover it. 
00:29:39.423 [2024-07-15 20:24:36.690583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.423 [2024-07-15 20:24:36.690593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.423 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for each reconnect attempt logged from 20:24:36.690 through 20:24:36.775 ...]
00:29:39.429 [2024-07-15 20:24:36.775590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.429 [2024-07-15 20:24:36.775599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.429 qpair failed and we were unable to recover it.
00:29:39.429 [2024-07-15 20:24:36.776011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.776021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.776322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.776332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.776730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.776739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.777162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.777171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.777475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.777484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.777887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.777896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.778312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.778322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.778700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.778710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.779084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.779094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.779490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.779500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 
00:29:39.429 [2024-07-15 20:24:36.779791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.779801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.780008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.780018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.780382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.780391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.780772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.780781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.781158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.781167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.781566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.781575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.781950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.781959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.782251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.782261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.782659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.782667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.783047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.783056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 
00:29:39.429 [2024-07-15 20:24:36.783500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.783510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.783916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.783925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.784325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.784335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.784715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.784724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.785021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.429 [2024-07-15 20:24:36.785030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.429 qpair failed and we were unable to recover it. 00:29:39.429 [2024-07-15 20:24:36.785251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.785261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.785650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.785659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.786046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.786055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.786435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.786444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.786753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.786762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 
00:29:39.430 [2024-07-15 20:24:36.787163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.787173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.787565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.787574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.787991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.788001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.788420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.788429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.788806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.788815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.789232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.789243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.789650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.789659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.790061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.790070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.790471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.790481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.790876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.790885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 
00:29:39.430 [2024-07-15 20:24:36.791201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.791219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.791622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.791631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.791913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.791922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.792304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.792314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.792701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.792711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.793092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.793101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.793478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.793488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.793902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.793911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.794290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.794299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.794705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.794715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 
00:29:39.430 [2024-07-15 20:24:36.795096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.795105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.795548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.795558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.795947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.795956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.796466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.796503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.796980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.796991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.797368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.797405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.797817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.797829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.798132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.798142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.798581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.798590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.798978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.798987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 
00:29:39.430 [2024-07-15 20:24:36.799485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.799521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.430 [2024-07-15 20:24:36.799958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.430 [2024-07-15 20:24:36.799970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.430 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.800451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.800492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.800912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.800924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.801337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.801373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.801803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.801814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.802097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.802107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.802518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.802528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.802904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.802913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.803445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.803482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 
00:29:39.431 [2024-07-15 20:24:36.803919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.803931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.804415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.804451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.804893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.804905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.805436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.805473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.805918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.805929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.806151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.806166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.806468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.806478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.806845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.806854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.807250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.807261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.807752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.807761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 
00:29:39.431 [2024-07-15 20:24:36.808150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.808161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.431 [2024-07-15 20:24:36.808371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.431 [2024-07-15 20:24:36.808382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.431 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.808719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.808730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.809063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.809073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.809550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.809560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.809938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.809947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.810221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.810232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.810657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.810666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.811047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.811056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.811470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.811483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 
00:29:39.700 [2024-07-15 20:24:36.811785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.811795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.812210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.812219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.812597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.812606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.812985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.812994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.813373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.813383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.813763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.813773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.814154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.814163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.814571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.814580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.815006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.815015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.815315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.815325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 
00:29:39.700 [2024-07-15 20:24:36.815688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.815697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.815995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.816004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.816386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.816395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.700 qpair failed and we were unable to recover it. 00:29:39.700 [2024-07-15 20:24:36.816696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.700 [2024-07-15 20:24:36.816706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.817112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.817135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.817498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.817507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.817886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.817895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.818319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.818328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.818723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.818732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.819120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.819135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-07-15 20:24:36.819419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.819428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.819818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.819827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.820284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.820294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.820584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.820594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.820997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.821007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.821340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.821350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.821737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.821746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.822150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.822160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.822567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.822576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.822958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.822967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-07-15 20:24:36.823372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.823381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.823746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.823755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.824248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.824259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.824567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.824577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.825007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.825017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.825478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.825488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.825888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.825898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.826263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.826272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.826685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.826694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 00:29:39.701 [2024-07-15 20:24:36.827015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.701 [2024-07-15 20:24:36.827024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.701 qpair failed and we were unable to recover it. 
00:29:39.701 [2024-07-15 20:24:36.827413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.701 [2024-07-15 20:24:36.827424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.701 qpair failed and we were unable to recover it.
00:29:39.706 [... the same three-message group repeats for every reconnection attempt logged between 20:24:36.827413 and 20:24:36.913071; each attempt fails with connect() errno = 111 against tqpair=0x1593220, addr=10.0.0.2, port=4420, and ends with "qpair failed and we were unable to recover it." ...]
00:29:39.706 [2024-07-15 20:24:36.913457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.913466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.913866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.913876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.914286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.914296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.914694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.914703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.915083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.915093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.915493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.915502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.915880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.915890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.916403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.916440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.916877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.916889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.917370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.917406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-07-15 20:24:36.917761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.917773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.918176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.918187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.918591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.918601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.919009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.919019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.919434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.919444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.919752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.919762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.920170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.920180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.920511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.920521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.920941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.920950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.921327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.921337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-07-15 20:24:36.921732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.921742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.922149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.922159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.922575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.922584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.922959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.922968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.923363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.923373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.923661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.923670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.924092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.924101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.924496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.924505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.924919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.924928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.925227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.925238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 
00:29:39.706 [2024-07-15 20:24:36.925732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.925742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.926117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.926133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.926365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.926378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.926786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.926795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.927173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.927186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.927596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.927605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.927984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.927993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.928432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.706 [2024-07-15 20:24:36.928441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.706 qpair failed and we were unable to recover it. 00:29:39.706 [2024-07-15 20:24:36.928827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.928836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.929334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.929370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 
00:29:39.707 [2024-07-15 20:24:36.929788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.929799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.930218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.930229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.930613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.930622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.930915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.930927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.931344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.931354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.931729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.931738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.932033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.932043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.932443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.932453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.932836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.932845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.933310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.933319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 
00:29:39.707 [2024-07-15 20:24:36.933710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.933719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.934098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.934107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.934512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.934522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.934898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.934908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.935360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.935396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.935836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.935848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.936235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.936245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.936734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.936744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.937136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.937146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.937556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.937565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 
00:29:39.707 [2024-07-15 20:24:36.937946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.937956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.938351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.938365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.938793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.938803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.939111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.939120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.939573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.939583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.940002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.940012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.940400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.940409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.940747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.940756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.941183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.941192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.941685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.941695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 
00:29:39.707 [2024-07-15 20:24:36.942096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.942105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.942553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.942562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.942953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.942963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.943445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.943481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.943915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.943927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.944426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.944462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.944897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.944908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.945330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.945366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.945802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.945813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.946333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.946370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 
00:29:39.707 [2024-07-15 20:24:36.946809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.946820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.947241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.947252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.947670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.947680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.947990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.707 [2024-07-15 20:24:36.948000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.707 qpair failed and we were unable to recover it. 00:29:39.707 [2024-07-15 20:24:36.948408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.948418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.948813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.948823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.949116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.949131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.949538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.949547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.949928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.949937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.950419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.950455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-07-15 20:24:36.950899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.950911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.951460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.951496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.951987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.951998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.952520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.952556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.953032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.953044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.953542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.953579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.954012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.954024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.954435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.954446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.954851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.954861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.955382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.955418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-07-15 20:24:36.955846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.955858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.956235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.956246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.956665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.956675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.957055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.957064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.957459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.957469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.957848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.957858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.958310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.958320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.958719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.958729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.959154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.959164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.959589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.959598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-07-15 20:24:36.959973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.959982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.960277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.960287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.960560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.960571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.961019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.961028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.961407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.961416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.961839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.961848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.962225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.962234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.962611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.962620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.962996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.963005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.963392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.963402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 
00:29:39.708 [2024-07-15 20:24:36.963782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.963792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.964098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.964108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.964429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.964440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.964913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.964923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.965328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.965338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.965721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.965730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.966104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.966113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.966533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.708 [2024-07-15 20:24:36.966543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.708 qpair failed and we were unable to recover it. 00:29:39.708 [2024-07-15 20:24:36.966957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-07-15 20:24:36.966966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 00:29:39.709 [2024-07-15 20:24:36.967455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.709 [2024-07-15 20:24:36.967496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.709 qpair failed and we were unable to recover it. 
00:29:39.709 [2024-07-15 20:24:36.967742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.709 [2024-07-15 20:24:36.967757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.709 qpair failed and we were unable to recover it.
00:29:39.709 [... the same connect()/qpair error triplet repeats approximately 200 more times between 2024-07-15 20:24:36.968 and 20:24:37.053, always for tqpair=0x1593220, addr=10.0.0.2, port=4420, errno = 111 ...]
00:29:39.713 [2024-07-15 20:24:37.053653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.713 [2024-07-15 20:24:37.053662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.713 qpair failed and we were unable to recover it.
00:29:39.713 [2024-07-15 20:24:37.054050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.054059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.054443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.054453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.054866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.054875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.055251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.055261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.055656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.055665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.056073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.056083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.056474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.056487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.056887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.056897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.057273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.057283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.057661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.057670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 
00:29:39.713 [2024-07-15 20:24:37.058115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.058130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.058532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.058541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.058925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.058934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.059429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.059465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.059900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.059912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.060380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.060417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.060843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.060854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.061323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.061360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.061760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.061772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.062149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.062159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 
00:29:39.713 [2024-07-15 20:24:37.062474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.062483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.713 [2024-07-15 20:24:37.062886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.713 [2024-07-15 20:24:37.062896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.713 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.063294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.063304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.063750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.063760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.064033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.064044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.064446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.064456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.064912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.064922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.065299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.065309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.065703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.065713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.066130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.066141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 
00:29:39.714 [2024-07-15 20:24:37.066539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.066549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.066837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.066847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.067255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.067264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.067637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.067649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.067964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.067973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.068368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.068378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.068796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.068805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.069188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.069197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.069489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.069499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.069902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.069911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 
00:29:39.714 [2024-07-15 20:24:37.070284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.070293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.070708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.070717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.071117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.071131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.071562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.071571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.071994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.072003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.072386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.072396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.072702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.072711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.073102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.073112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.073427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.073437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.073840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.073850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 
00:29:39.714 [2024-07-15 20:24:37.074249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.074259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.074696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.074705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.075093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.075102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.075403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.075414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.075813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.075822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.076201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.076211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.076642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.076651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.077061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.077070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.077283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.077296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.077700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.077710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 
00:29:39.714 [2024-07-15 20:24:37.078139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.078154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.078555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.078564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.078941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.078950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.079337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.079347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.079680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.079689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.080087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.080096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.080476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.080486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.080879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.080888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.081267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.081277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.081678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.081687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 
00:29:39.714 [2024-07-15 20:24:37.082105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.082115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.714 qpair failed and we were unable to recover it. 00:29:39.714 [2024-07-15 20:24:37.082536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.714 [2024-07-15 20:24:37.082546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.082852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.082861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.083200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.083209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.083637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.083647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.084027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.084036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.084443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.084453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.084835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.084845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.085163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.085172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.085548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.085558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 
00:29:39.715 [2024-07-15 20:24:37.085973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.085982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.086358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.086368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.086769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.086779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.087179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.087188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.087474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.087484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.087887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.087896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.088271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.088281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.088680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.088689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.089067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.089076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.089475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.089485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 
00:29:39.715 [2024-07-15 20:24:37.089792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.089801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.090195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.090204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.090588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.090597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.090974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.090983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.091414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.091424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.091746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.091755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.092134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.092144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.092548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.092557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.092931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.092940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.093482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.093519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 
00:29:39.715 [2024-07-15 20:24:37.093941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.093953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.094472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.094512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.094946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.094959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.095452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.095488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.095912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.095923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.096424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.096461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.096893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.096905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.097377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.097413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.097835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.097847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 00:29:39.715 [2024-07-15 20:24:37.098355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.715 [2024-07-15 20:24:37.098392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.715 qpair failed and we were unable to recover it. 
00:29:39.715 [2024-07-15 20:24:37.098720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.098732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.099131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.099142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.099567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.099576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.099959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.099968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.100433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.100469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.100909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.100921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.101400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.101437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.101840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.101852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.102148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.102167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.102587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.102596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 
00:29:39.716 [2024-07-15 20:24:37.103014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.103023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.103468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.103477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.103690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.103703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.104115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.104137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.104524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.104533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.104933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.104942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.105344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.105354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.105856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.105865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.106072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.106088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 00:29:39.716 [2024-07-15 20:24:37.106373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.716 [2024-07-15 20:24:37.106383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.716 qpair failed and we were unable to recover it. 
00:29:39.716 [2024-07-15 20:24:37.106835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.716 [2024-07-15 20:24:37.106845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.716 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every remaining connection attempt between 20:24:37.106 and 20:24:37.192 ...]
00:29:39.993 [2024-07-15 20:24:37.192693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.993 [2024-07-15 20:24:37.192703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.993 qpair failed and we were unable to recover it.
00:29:39.993 [2024-07-15 20:24:37.193083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.193092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.193501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.193511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.193984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.193994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.194476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.194513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.194824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.194835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.195240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.195256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.195678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.195688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.196109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.196119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.196539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.196549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.196954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.196963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 
00:29:39.993 [2024-07-15 20:24:37.197446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.197483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.197911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.197923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.198434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.198471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.198818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.198830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.199276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.199312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.199734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.199746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.200162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.200172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.200555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.200564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 20:24:37.200944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.993 [2024-07-15 20:24:37.200953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.201169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.201179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 
00:29:39.994 [2024-07-15 20:24:37.201566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.201575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.201953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.201962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.202337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.202347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.202726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.202735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.203157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.203168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.203560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.203569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.203782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.203796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.204206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.204217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.204607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.204616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.204999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.205008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 
00:29:39.994 [2024-07-15 20:24:37.205402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.205411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.205712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.205721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.206035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.206047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.206443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.206453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.206855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.206865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.207242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.207260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.207642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.207651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.208058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.208067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.208285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.208297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.208595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.208605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 
00:29:39.994 [2024-07-15 20:24:37.208986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.208996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.209380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.209390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.209763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.209773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.210243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.210253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.210659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.210668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.211059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.211068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.211484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.211493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.211918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.211927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.212306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.212315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.212693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.212702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 
00:29:39.994 [2024-07-15 20:24:37.213084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.213093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.213471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.994 [2024-07-15 20:24:37.213481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 20:24:37.213908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.213917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.214400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.214436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.214848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.214860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.215263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.215274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.215676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.215685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.216070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.216079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.216341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.216351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.216765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.216774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 
00:29:39.995 [2024-07-15 20:24:37.217155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.217165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.217568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.217577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.217953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.217964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.218363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.218373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.218681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.218691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.218991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.219000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.219256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.219267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.219556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.219565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.219983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.219992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.220372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.220381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 
00:29:39.995 [2024-07-15 20:24:37.220781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.220790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.221187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.221196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.221571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.221580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.221984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.221994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.222414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.222425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.222731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.222741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.223142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.223152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.223532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.223541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.223918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.223927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.224307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.224317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 
00:29:39.995 [2024-07-15 20:24:37.224697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.224706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.225018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.225027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.225413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.225422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.225723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.225732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.226159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.226168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.226564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.226573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.226992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.227001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.227380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.227390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.227784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.227794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.228196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.228206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 
00:29:39.995 [2024-07-15 20:24:37.228599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.228608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.228985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.228995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.229396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.995 [2024-07-15 20:24:37.229405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.995 qpair failed and we were unable to recover it. 00:29:39.995 [2024-07-15 20:24:37.229796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.229806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.230119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.230145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.230524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.230533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.230919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.230928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.231478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.231514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.231809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.231822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.232132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.232143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 
00:29:39.996 [2024-07-15 20:24:37.232544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.232558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.232977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.232987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.233476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.233513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.233952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.233964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.234465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.234502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.234817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.234829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.235351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.235387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.235821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.235833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.236105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.236117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.236537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.236547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 
00:29:39.996 [2024-07-15 20:24:37.236858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.236868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.237373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.237409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.237839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.237851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.238395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.238431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.238846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.238858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.239239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.239250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.239670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.239680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.240060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.240069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.240463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.240472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.240850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.240860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 
00:29:39.996 [2024-07-15 20:24:37.241288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.241298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.241650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.241659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.242054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.242063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.242464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.242473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.242889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.996 [2024-07-15 20:24:37.242898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.996 qpair failed and we were unable to recover it. 00:29:39.996 [2024-07-15 20:24:37.243321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.997 [2024-07-15 20:24:37.243330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.997 qpair failed and we were unable to recover it. 00:29:39.997 [2024-07-15 20:24:37.243710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.997 [2024-07-15 20:24:37.243719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.997 qpair failed and we were unable to recover it. 00:29:39.997 [2024-07-15 20:24:37.244095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.997 [2024-07-15 20:24:37.244109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.997 qpair failed and we were unable to recover it. 00:29:39.997 [2024-07-15 20:24:37.244489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.997 [2024-07-15 20:24:37.244499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.997 qpair failed and we were unable to recover it. 00:29:39.997 [2024-07-15 20:24:37.244743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.997 [2024-07-15 20:24:37.244752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:39.997 qpair failed and we were unable to recover it. 
00:29:39.997 [2024-07-15 20:24:37.245158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.997 [2024-07-15 20:24:37.245168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:39.997 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connect attempt from 2024-07-15 20:24:37.245486 through 20:24:37.333147, console timestamps 00:29:39.997 through 00:29:40.002 ...]
00:29:40.002 [2024-07-15 20:24:37.333546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.333556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.333991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.334001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.334416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.334426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.334829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.334838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.335204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.335216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.335621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.335630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.336032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.002 [2024-07-15 20:24:37.336041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.002 qpair failed and we were unable to recover it. 00:29:40.002 [2024-07-15 20:24:37.336352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.336362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.336743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.336752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.337130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.337139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 20:24:37.337523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.337532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.337905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.337914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.338294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.338303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.338703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.338713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.339006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.339017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.339446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.339457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.339840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.339849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.340232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.340242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.340662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.340672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.341073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.341083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 20:24:37.341489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.341498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.341941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.341951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.342351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.342361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.342648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.342658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.343071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.343080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.343548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.343558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.343933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.343942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.344443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.344479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.344903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.344915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.345404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.345441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 20:24:37.345878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.345889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.346431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.346467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.346888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.346899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.347396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.347432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.347663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.347677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.348074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.348085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.348516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.348527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.348927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.348937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.349419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.349455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.349888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.349900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 
00:29:40.003 [2024-07-15 20:24:37.350119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.350141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.350562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.350572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.350953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.350963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.351515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.351552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.351891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.351902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.352408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.352445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.352881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.352893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.353417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.353454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.003 qpair failed and we were unable to recover it. 00:29:40.003 [2024-07-15 20:24:37.353869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.003 [2024-07-15 20:24:37.353880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.354362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.354398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 
00:29:40.004 [2024-07-15 20:24:37.354834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.354846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.355365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.355401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.355843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.355854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.356231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.356242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.356655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.356665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.357066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.357076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.357505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.357515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.357891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.357900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.358394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.358430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.358906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.358918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 
00:29:40.004 [2024-07-15 20:24:37.359418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.359454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.359829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.359840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.360264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.360274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.360685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.360694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.360989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.361000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.361402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.361412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.361849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.361858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.362347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.362384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.362721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.362733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.363105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.363115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 
00:29:40.004 [2024-07-15 20:24:37.363523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.363533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.363910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.363920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.364316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.364356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.364793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.364805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.365208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.365218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.365629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.365639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.366066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.366075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.366384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.366394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.366802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.366811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.367191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.367201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 
00:29:40.004 [2024-07-15 20:24:37.367621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.367630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.368014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.368023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.368423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.368433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.368834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.368844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.369279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.369289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.369679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.369688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.369984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.369993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.370384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.370394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.370782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.370791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 00:29:40.004 [2024-07-15 20:24:37.371169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.004 [2024-07-15 20:24:37.371178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.004 qpair failed and we were unable to recover it. 
00:29:40.004 [2024-07-15 20:24:37.371489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.371499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.371898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.371907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.372286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.372295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.372706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.372715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.373094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.373103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.373574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.373583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.374004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.374013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.374476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.374486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.374871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.374880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.375375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.375416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 
00:29:40.005 [2024-07-15 20:24:37.375841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.375853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.376254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.376265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.376663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.376672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.377129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.377138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.377341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.377354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.377793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.377803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.378233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.378243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.378645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.378654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.379064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.379073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.379461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.379470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 
00:29:40.005 [2024-07-15 20:24:37.379869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.379879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.380279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.380289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.380686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.380695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.381115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.381128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.381523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.381532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.381912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.381921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.382457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.382493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.382929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.382940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.383417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.383454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.383766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.383778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 
00:29:40.005 [2024-07-15 20:24:37.384202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.384213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.384592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.384601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.384979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.384989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.385376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.385386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.385820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.385829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.386120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.386136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.386520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.386534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.386908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.386917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.387331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.387367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.387800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.387812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 
00:29:40.005 [2024-07-15 20:24:37.388321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.005 [2024-07-15 20:24:37.388358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.005 qpair failed and we were unable to recover it. 00:29:40.005 [2024-07-15 20:24:37.388794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.388806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.389228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.389238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.389616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.389626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.389925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.389935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.390353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.390363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.390785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.390794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.391212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.391221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.391602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.391612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.391990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.391999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 
00:29:40.006 [2024-07-15 20:24:37.392503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.392513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.392887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.392896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.393275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.393285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.393695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.393705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.394135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.394145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.394538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.394547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.394842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.394852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.395271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.395280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.395746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.395755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.396131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.396141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 
00:29:40.006 [2024-07-15 20:24:37.396525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.396534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.396812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.396823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.397250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.397259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.397677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.397686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.397893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.397905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.398324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.398334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.398716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.398725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.399121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.399136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.399513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.399523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.399924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.399934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 
00:29:40.006 [2024-07-15 20:24:37.400352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.400389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.400825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.400837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.401236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.401246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.401643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.401652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.402047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.402056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.402366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.402376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.402589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.402603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.403019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.403029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.403329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.403339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.403747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.403756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 
00:29:40.006 [2024-07-15 20:24:37.404141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.404151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.404427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.404438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.006 [2024-07-15 20:24:37.404855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.006 [2024-07-15 20:24:37.404865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.006 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.405281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.405290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.405498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.405509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.405935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.405945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.406334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.406344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.406767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.406776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.407157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.407167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.407617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.407626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 
00:29:40.007 [2024-07-15 20:24:37.408047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.408057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.408461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.408471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.007 [2024-07-15 20:24:37.408874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.007 [2024-07-15 20:24:37.408884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.007 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.409288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.409299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.409671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.409681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.410077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.410086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.410491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.410501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.410909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.410918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.411340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.411350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.411749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.411759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 
00:29:40.278 [2024-07-15 20:24:37.412156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.412166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.412565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.412574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.412873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.412882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.413284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.413294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.413677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.413692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.414073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.414082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.414548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.414557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.414933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.414942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.415316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.415325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.415738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.415747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 
00:29:40.278 [2024-07-15 20:24:37.416130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.416140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.416550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.416560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.416959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.416968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.417338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.417375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.417793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.417805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.418303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.418340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.418749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.418760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.419137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.419147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.419591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.419601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.419984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.419993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 
00:29:40.278 [2024-07-15 20:24:37.420373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.420383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.420793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.420802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.421183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.421193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.421593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.421603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.422012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.422022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.422447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.422456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.422848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.422857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.423068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.423082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.278 [2024-07-15 20:24:37.423478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.278 [2024-07-15 20:24:37.423488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.278 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.423814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.423823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 
00:29:40.279 [2024-07-15 20:24:37.424224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.424234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.424636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.424648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.425027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.425036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.425437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.425447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.425838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.425847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.426141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.426152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.426437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.426447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.426868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.426877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.427260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.427269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.427679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.427689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 
00:29:40.279 [2024-07-15 20:24:37.428079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.428088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.428489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.428499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.428926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.428935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.429338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.429347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.429750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.429760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.430136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.430146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.430448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.430457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.430846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.430855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.431138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.431147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.431424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.431435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 
00:29:40.279 [2024-07-15 20:24:37.431821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.431830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.432126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.432136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.432515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.432524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.432903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.432913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.433335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.433345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.433743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.433753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.434144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.434153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.434462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.434471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.434896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.434906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.435296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.435306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 
00:29:40.279 [2024-07-15 20:24:37.435704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.435713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.436098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.436107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.436559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.436569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.436944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.436953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.437347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.437383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.437813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.437826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.438200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.438211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.438609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.438619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.439019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.279 [2024-07-15 20:24:37.439028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.279 qpair failed and we were unable to recover it. 00:29:40.279 [2024-07-15 20:24:37.439454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.439465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 
00:29:40.280 [2024-07-15 20:24:37.439837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.439846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.440156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.440166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.440551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.440561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.440975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.440985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.441385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.441394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.441771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.441780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.442155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.442165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.442378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.442391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.442802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.442812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.443250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.443259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 
00:29:40.280 [2024-07-15 20:24:37.443710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.443720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.444121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.444135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.444515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.444524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.444941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.444951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.445243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.445252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.445659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.445668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.446053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.446062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.446441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.446451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.446754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.446764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.447286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.447296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 
00:29:40.280 [2024-07-15 20:24:37.447748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.447758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.448147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.448164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.448579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.448589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.448882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.448891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.449362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.449371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.449797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.449806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.450187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.450197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.450576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.450586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.451008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.451018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.451428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.451441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 
00:29:40.280 [2024-07-15 20:24:37.451822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.451831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.452250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.452260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.452636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.452645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.453051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.453060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.453467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.453477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.453887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.453896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.454320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.454330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.454794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.454803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.455182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.455191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 00:29:40.280 [2024-07-15 20:24:37.455596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.280 [2024-07-15 20:24:37.455606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.280 qpair failed and we were unable to recover it. 
00:29:40.280 [2024-07-15 20:24:37.455905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.281 [2024-07-15 20:24:37.455915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:40.281 qpair failed and we were unable to recover it.
00:29:40.281 [... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously, roughly 200 more times, for timestamps 2024-07-15 20:24:37.456 through 20:24:37.541, with identical errno, tqpair, address, and port on every occurrence ...]
00:29:40.286 [2024-07-15 20:24:37.541898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.286 [2024-07-15 20:24:37.541907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:40.286 qpair failed and we were unable to recover it.
00:29:40.286 [2024-07-15 20:24:37.542283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.542292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.542682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.542692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.543117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.543139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.543524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.543533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.543907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.543916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.544404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.544440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.544734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.544747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.545158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.545168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.545551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.545560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.545982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.545992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 
00:29:40.286 [2024-07-15 20:24:37.546411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.546421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.546830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.546839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.547242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.547253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.547667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.547677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.548106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.548115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.548542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.548552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.548961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.548971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.549504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.549542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.549964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.549976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.550472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.550509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 
00:29:40.286 [2024-07-15 20:24:37.550929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.550942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.551350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.286 [2024-07-15 20:24:37.551387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.286 qpair failed and we were unable to recover it. 00:29:40.286 [2024-07-15 20:24:37.551790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.551803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.552374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.552415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.552825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.552838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.553239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.553251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.553634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.553645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.554085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.554095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.554456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.554467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.554867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.554878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 
00:29:40.287 [2024-07-15 20:24:37.555182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.555193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.555593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.555603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.556005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.556014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.556436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.556447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.556870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.556880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.557284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.557294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.557740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.557750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.558155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.558166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.558664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.558674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.559093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.559103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 
00:29:40.287 [2024-07-15 20:24:37.559511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.559521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.559923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.559932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.560424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.560460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.560893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.560905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.561416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.561452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.561885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.561897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.562414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.562451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.562863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.562875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.563423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.563459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.563893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.563905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 
00:29:40.287 [2024-07-15 20:24:37.564434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.564475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.564895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.564907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.565381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.565417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.565636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.565649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.566053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.566063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.566499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.566510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.566889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.566899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.567197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.567207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.567591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.567601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.568005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.568015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 
00:29:40.287 [2024-07-15 20:24:37.568429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.568438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.568838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.568848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.287 [2024-07-15 20:24:37.569250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.287 [2024-07-15 20:24:37.569260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.287 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.569649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.569659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.569952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.569962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.570367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.570377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.570756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.570765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.571190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.571199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.571607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.571617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.572013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.572022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 
00:29:40.288 [2024-07-15 20:24:37.572465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.572475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.572889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.572899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.573333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.573342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.573719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.573728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.574195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.574205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.574632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.574642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.575040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.575049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.575453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.575464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.575837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.575847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.576272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.576282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 
00:29:40.288 [2024-07-15 20:24:37.576699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.576709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.577169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.577180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.577420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.577430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.577843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.577852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.578225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.578235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.578658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.578668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.579048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.579058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.579453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.579464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.579862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.579871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.580205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.580214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 
00:29:40.288 [2024-07-15 20:24:37.580596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.580606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.581006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.581016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.581407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.581417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.581783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.581792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.582099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.582109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.582520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.582531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.582850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.582860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.583281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.583290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.583716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.583725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.584100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.584109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 
00:29:40.288 [2024-07-15 20:24:37.584519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.584528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.584929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.584938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.585472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.288 [2024-07-15 20:24:37.585508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.288 qpair failed and we were unable to recover it. 00:29:40.288 [2024-07-15 20:24:37.585941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.585953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.586529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.586566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.586969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.586981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.587541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.587577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.588013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.588025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.588471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.588482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.588863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.588872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 
00:29:40.289 [2024-07-15 20:24:37.589393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.589430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.589684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.589697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.589964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.589974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.590286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.590297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.590693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.590702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.591005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.591014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.591443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.591453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.591838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.591847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.592227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.592241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.592646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.592655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 
00:29:40.289 [2024-07-15 20:24:37.593055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.593065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.593439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.593449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.593874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.593885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.594290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.594300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.594706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.594717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.595032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.595041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.595463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.595473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.595851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.595862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.596266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.596276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.596671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.596680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 
00:29:40.289 [2024-07-15 20:24:37.597059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.597069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.597370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.597380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.597800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.597810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.598184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.598194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.598581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.598592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.598991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.599001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.599440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.599450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.599829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.599838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.600272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.600281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.600690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.600700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 
00:29:40.289 [2024-07-15 20:24:37.601103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.601115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.601585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.601595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.601985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.601995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.602486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.602523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.289 qpair failed and we were unable to recover it. 00:29:40.289 [2024-07-15 20:24:37.602975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.289 [2024-07-15 20:24:37.602986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.603475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.603517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.603910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.603923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.604378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.604414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.604824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.604836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.605348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.605385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 
00:29:40.290 [2024-07-15 20:24:37.605812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.605824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.606216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.606227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.606629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.606640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.607041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.607051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.607444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.607454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.607936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.607946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.608333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.608343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.608731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.608741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.609051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.609062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.609356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.609369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 
00:29:40.290 [2024-07-15 20:24:37.609768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.609778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.610155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.610164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.610577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.610586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.610982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.610991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.611389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.611399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.611782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.611791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.612169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.612178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.612583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.612594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.613001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.613010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.613431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.613441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 
00:29:40.290 [2024-07-15 20:24:37.613865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.613874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.614294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.614304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.614678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.614687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.615093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.615102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.615539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.615549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.615923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.615933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.616224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.616237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.616642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.616652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.617075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.617084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.617524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.617533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 
00:29:40.290 [2024-07-15 20:24:37.617915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.617924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.618419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.618456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.618887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.618899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.619360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.619396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.619821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.619833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.290 qpair failed and we were unable to recover it. 00:29:40.290 [2024-07-15 20:24:37.620229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.290 [2024-07-15 20:24:37.620240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.620668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.620679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.620950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.620961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.621348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.621359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.621770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.621780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 
00:29:40.291 [2024-07-15 20:24:37.622228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.622238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.622629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.622638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.623016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.623025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.623333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.623344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.623598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.623609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.623911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.623921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.624326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.624336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.624710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.624720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.625105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.625114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.625557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.625566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 
00:29:40.291 [2024-07-15 20:24:37.625938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.625948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.626344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.626380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.626811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.626823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.627201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.627212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.627588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.627598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.628003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.628013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.628477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.628487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.628868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.628877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.629171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.629182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.629585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.629594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 
00:29:40.291 [2024-07-15 20:24:37.629978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.629987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.630365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.630375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.630795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.630805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.631183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.631197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.631589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.631599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.632002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.632012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.632390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.632400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.632815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.632825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.633247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.633256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 00:29:40.291 [2024-07-15 20:24:37.633640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.633649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.291 qpair failed and we were unable to recover it. 
00:29:40.291 [2024-07-15 20:24:37.634068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.291 [2024-07-15 20:24:37.634077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.634461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.634471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.634891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.634900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.635295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.635305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.635481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.635492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.635934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.635943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.636364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.636373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.636789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.636798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.637127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.637137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.637557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.637566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 
00:29:40.292 [2024-07-15 20:24:37.637966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.637975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.638500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.638537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.638887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.638899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.639318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.639330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.639783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.639793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.640283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.640319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.640761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.640773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.641195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.641205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.641591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.641601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.641974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.641984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 
00:29:40.292 [2024-07-15 20:24:37.642344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.642358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.642760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.642770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.643195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.643205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.643589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.643599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.644061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.644071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.644481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.644491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.644909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.644919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.645309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.645319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.645610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.645619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.646025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.646035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 
00:29:40.292 [2024-07-15 20:24:37.646432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.646442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.646828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.646838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.647252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.647261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.647643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.647652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.648034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.648044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.648454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.648463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.648887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.648896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.649103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.649114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.649520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.649531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.649931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.649941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 
00:29:40.292 [2024-07-15 20:24:37.650420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.292 [2024-07-15 20:24:37.650457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.292 qpair failed and we were unable to recover it. 00:29:40.292 [2024-07-15 20:24:37.650896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.650907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.651411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.651447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.651881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.651893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.652329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.652366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.652840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.652851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.653350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.653386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.653820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.653837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.654231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.654242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.654644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.654654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 
00:29:40.293 [2024-07-15 20:24:37.655071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.655080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.655296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.655310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.655729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.655738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.656108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.656117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.656482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.656491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.656905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.656915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.657406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.657442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.657874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.657885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.658248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.658259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.658666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.658676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 
00:29:40.293 [2024-07-15 20:24:37.659059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.659068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.659451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.659461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.659853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.659862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.660237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.660247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.660669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.660678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.661064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.661074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.661468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.661478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.661797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.661807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.662234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.662244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.662624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.662633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 
00:29:40.293 [2024-07-15 20:24:37.663051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.663060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.663444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.663454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.663832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.663841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.664135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.664146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.664543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.664552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.664927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.664937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.665313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.665323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.665717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.665727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.666115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.666129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.666547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.666556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 
00:29:40.293 [2024-07-15 20:24:37.666932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.666941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.667410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.667446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.293 qpair failed and we were unable to recover it. 00:29:40.293 [2024-07-15 20:24:37.667882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.293 [2024-07-15 20:24:37.667894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.668409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.668445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.668879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.668891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.669344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.669380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.669813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.669824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.670245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.670255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.670545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.670555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 00:29:40.294 [2024-07-15 20:24:37.670878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-07-15 20:24:37.670887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.294 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-07-15 20:24:37.752886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.752895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.753277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.753289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.753709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.753718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.754092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.754102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.754497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.754508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.754823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.754834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.755260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.755269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.755581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.755591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.756013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.756023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.756425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.756434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-07-15 20:24:37.756817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.756826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.757116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.757129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.757400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.757411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.757786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.757796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.758174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.758184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.758590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.758599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.758892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.758902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.759363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.759373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.759754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.759764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.760191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.760201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-07-15 20:24:37.760597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.760607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.761042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.761052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.761521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.761531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.761930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.761941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-07-15 20:24:37.762346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-07-15 20:24:37.762355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.762737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.762747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.763130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.763140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.763535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.763544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.763824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.763833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.764345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.764382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-07-15 20:24:37.764817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.764829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.765254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.765265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.765700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.765711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.766018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.766028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.766432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.766443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.766829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.766839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.767247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.767257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.767713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.767722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.768159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.768169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.768475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.768484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-07-15 20:24:37.768798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.768807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.769210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.769220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.769610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.769621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.770001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.770010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.770406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.770416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.770832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.770841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.771220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.771231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.771716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.771725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.772102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.772111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.772550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.772560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-07-15 20:24:37.772943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.772953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.773328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.773338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.773755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.773767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.774173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.774184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.774651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.774660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.775029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.775038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.775381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.775392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.775790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-07-15 20:24:37.776130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-07-15 20:24:37.776140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.776517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.776527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-07-15 20:24:37.776935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.776944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.777451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.777487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.777736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.777751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.778147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.778158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.778566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.778575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.778952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.778962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.779409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.779419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.779796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.779806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.780207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.780216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.780535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.780544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-07-15 20:24:37.780948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.780958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.781360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.781369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.781801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.781811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.782238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.782249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.782667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.782676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.782968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.782977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.783382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.783391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.783690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.783699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.784089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.784098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.784399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.784411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-07-15 20:24:37.784801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.784811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.785213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.785223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.785636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.785645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.785928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.785938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.786226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.786236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.786524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.786533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.786932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.786941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.787330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.787339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.787734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.787743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.788148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.788158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-07-15 20:24:37.788542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.788551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.788960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.788969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.789358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.789368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.789765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.789774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.790168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.790178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.790587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.790596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-07-15 20:24:37.790908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-07-15 20:24:37.790917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.791316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.791326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.791772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.791781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.792183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.792193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 
00:29:40.573 [2024-07-15 20:24:37.792503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.792512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.792899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.792908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.793329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.793339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.793745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.793754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.794156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.794166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.794587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.794596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.794998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.795007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.795382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.795392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.795699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.795708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.796107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.796116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 
00:29:40.573 [2024-07-15 20:24:37.796558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.796568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.796876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.796886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.797281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.797291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.797712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.797721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.798026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.798035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.798456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.798466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.798748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.798768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.799166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.799176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.799494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.799503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.799903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.799912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 
00:29:40.573 [2024-07-15 20:24:37.800221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.800231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.800627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.800636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.801031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.801040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.801442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.801452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.801831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.801840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.802217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.802226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.802651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.802660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.803036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.803045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.803482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.803491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.803950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.803960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 
00:29:40.573 [2024-07-15 20:24:37.804353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.804363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.804795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.804804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.805178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.805187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.805587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.805597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.805978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.805988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.806365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.806374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.806853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.806862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.573 [2024-07-15 20:24:37.807322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.573 [2024-07-15 20:24:37.807359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.573 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.807767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.807780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.808186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.808197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 
00:29:40.574 [2024-07-15 20:24:37.808617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.808627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.809021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.809030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.809428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.809438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.809828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.809837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.810246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.810256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.810472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.810486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.810899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.810908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.811337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.811351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.811732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.811742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.812146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.812156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 
00:29:40.574 [2024-07-15 20:24:37.812569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.812578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.812985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.812994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.813510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.813520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.813980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.813990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.814379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.814391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.814799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.814809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.815116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.815136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.815559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.815568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.815949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.815958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.816366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.816403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 
00:29:40.574 [2024-07-15 20:24:37.816840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.816852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.817323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.817360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.817801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.817814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.818193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.818205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.818576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.818587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.818988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.818997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.819416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.819426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.819792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.819802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.820071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.820089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.820383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.820393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 
00:29:40.574 [2024-07-15 20:24:37.820782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.820791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.821167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.821177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.821570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.821580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.821964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.821974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.822385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.822401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.822614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.822628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.823035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.823044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.823432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.823442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.574 [2024-07-15 20:24:37.823825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.574 [2024-07-15 20:24:37.823834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.574 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.824139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.824149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 
00:29:40.575 [2024-07-15 20:24:37.824547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.824556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.824934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.824943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.825335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.825345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.825734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.825744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.825943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.825954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.826397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.826407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.826804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.826814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.827188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.827198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.827551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.827561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.827952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.827961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 
00:29:40.575 [2024-07-15 20:24:37.828334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.828344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.828749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.828759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.829235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.829245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.829659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.829669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.830061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.830070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.830475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.830484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.830939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.830948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.831322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.831331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.831553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.831564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.831987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.831996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 
00:29:40.575 [2024-07-15 20:24:37.832427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.832436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.832812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.832826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.833317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.833353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.833794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.833806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.834184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.834196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.834600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.834610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.834989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.834998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.835290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.835300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.835726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.835736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.836026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.836037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 
00:29:40.575 [2024-07-15 20:24:37.836335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.836345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.836819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.836828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.837205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.837215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.837597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.837606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.837987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.837996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.839282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.575 [2024-07-15 20:24:37.839307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.575 qpair failed and we were unable to recover it. 00:29:40.575 [2024-07-15 20:24:37.839685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.839696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.840072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.840081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.840392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.840402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.840801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.840810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 
00:29:40.576 [2024-07-15 20:24:37.841187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.841197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.841607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.841617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.842043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.842053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.842361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.842371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.842785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.842795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.843197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.843206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.843636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.843646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.843948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.843957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.844348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.844358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.844817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.844826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 
00:29:40.576 [2024-07-15 20:24:37.845172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.845182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.845574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.845584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.845985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.845995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.846378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.846388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.846816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.846825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.847259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.847269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.847702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.847711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.848104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.848113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.848537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.848547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.849010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.849020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 
00:29:40.576 [2024-07-15 20:24:37.849398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.849408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.849704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.849715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.850036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.850045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.850462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.850473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.850855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.850864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.851260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.851270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.851647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.851656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.852036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.852045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.852532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.852542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.853020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.853029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 
00:29:40.576 [2024-07-15 20:24:37.853476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.853486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.853865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.853874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.854275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.854284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.854687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.854697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.576 qpair failed and we were unable to recover it. 00:29:40.576 [2024-07-15 20:24:37.855117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.576 [2024-07-15 20:24:37.855140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.855535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.855544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.855962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.855972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.856387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.856424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.856894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.856905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.857376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.857413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-07-15 20:24:37.857850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.857862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.858352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.858388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.858832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.858844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.859311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.859348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.859773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.859785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.860194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.860204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.860687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.860696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.861118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.861134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.861521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.861531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.861926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.861939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-07-15 20:24:37.862489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.862526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.862958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.862969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.863480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.863518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.863858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.863870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.864361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.864398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.864656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.864669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.864966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.864976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.865302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.865313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.865705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.865714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.866134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.866145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-07-15 20:24:37.866524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.866533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.866940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.866949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.867424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.867461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.867896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.867908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.868415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.868451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.868888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.868899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.869415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.869452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.869867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.869879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.870410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.870446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.870877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.870889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 
00:29:40.577 [2024-07-15 20:24:37.871394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.871431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.871868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.871880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.872369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.872406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.872837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.872850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.873272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.577 [2024-07-15 20:24:37.873283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.577 qpair failed and we were unable to recover it. 00:29:40.577 [2024-07-15 20:24:37.873667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.873677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.873985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.873998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.874390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.874400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.874797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.874807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.875101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.875110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-07-15 20:24:37.875521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.875531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.875919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.875928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.876402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.876439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.876851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.876864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.877375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.877412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.877846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.877858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.878352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.878389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.878841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.878854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.879282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.879293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.879700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.879710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-07-15 20:24:37.880003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.880014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.880437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.880446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.880881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.880891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.881266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.881276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.881699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.881709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.882086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.882095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.882480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.882490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.882870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.882879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.883260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.883271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.883660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.883670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-07-15 20:24:37.884071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.884081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.884491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.884501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.884935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.884944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.885411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.885448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.885880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.885892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.886399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.886436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.886877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.886889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.887370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.887407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.887745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.887758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.888084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.888094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-07-15 20:24:37.888522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.888532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.888911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.888920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.889456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.889492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.889925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.889937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.890430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.890467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-07-15 20:24:37.890874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-07-15 20:24:37.890886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.891429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.891465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.891900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.891912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.892407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.892443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.892878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.892890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-07-15 20:24:37.893416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.893452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.893886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.893897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.894442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.894479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.894919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.894932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.895424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.895461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.895885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.895897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.896417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.896454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.896891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.896903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.897334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.897371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.897545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.897557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-07-15 20:24:37.897933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.897942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.898430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.898441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.898831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.898841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.899221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.899230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.899658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.899667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.900145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.900154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.900564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.900573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.900989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.900998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.901379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.901389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.901790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.901800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-07-15 20:24:37.902120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.902142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.902578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.902587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.902949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.902959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.903386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.903423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.903857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.903872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.904351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.904387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.904816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.904828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.905344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.905381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.905820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.905832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.906261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.906271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-07-15 20:24:37.906574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.906584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.906999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.907009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.907392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.907402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.907795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.907804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.908195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.908204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.908613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.908622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-07-15 20:24:37.909026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-07-15 20:24:37.909036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.909434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.909444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.909864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.909874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.910179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.910189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-07-15 20:24:37.910610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.910619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.910922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.910931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.911354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.911365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.911740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.911749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.912130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.912139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.912542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.912551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.912931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.912940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.913447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.913484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-07-15 20:24:37.913918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-07-15 20:24:37.913930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.914448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.914484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-07-15 20:24:37.914909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.914921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.915417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.915459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.915850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.915862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.916356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.916392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.916822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.916834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.917151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.917161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.917590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.917600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.917976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.917985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.918356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.918367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.918766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.918775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-07-15 20:24:37.919146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.919156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.919555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.919565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.919944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.919954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.920335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.920344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.920749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.920759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.921164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.921174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.921588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.921599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.921982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.921991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.922366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.922376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.922762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.922771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-07-15 20:24:37.923148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.923158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.923576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.923586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.924003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.924012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.924298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.924315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.924687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.924696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.925076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.925085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.925492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.925502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.925895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.925905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.926328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.926343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.926711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.926721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-07-15 20:24:37.927095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.927104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.927604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.927614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.927995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.928005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.928400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.928411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.928805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.928814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.929103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.929113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.929428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.929438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.929844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.929854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.930333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-07-15 20:24:37.930370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-07-15 20:24:37.930812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.930824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-07-15 20:24:37.931254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.931265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.931675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.931684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.932063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.932073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.932371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.932382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.932835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.932845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.933255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.933264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.933654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.933664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.934070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.934080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.934260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.934270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.934677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.934687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-07-15 20:24:37.935068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.935077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.935366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.935376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.935797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.935807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.936107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.936117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.936501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.936510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.936821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.936830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.937253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.937264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.937668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.937678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.938062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.938071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.938474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.938483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-07-15 20:24:37.938877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.938887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.939282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.939292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.939725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.939735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.940141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.940151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.940555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.940565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.940943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.940953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.941344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.941354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.941778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.941788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.942168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.942178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.942484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.942496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-07-15 20:24:37.942898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.942907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.943320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.943330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.943710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.943719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.944102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.944112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.944545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.944554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.582 qpair failed and we were unable to recover it. 00:29:40.582 [2024-07-15 20:24:37.944938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.582 [2024-07-15 20:24:37.944947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.945443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.945480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.945953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.945964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.946450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.946487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.946914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.946926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 
00:29:40.583 [2024-07-15 20:24:37.947459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.947495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.947826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.947838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.948383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.948420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.948855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.948866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.949245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.949256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.949679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.949689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.950067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.950076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.950535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.950545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.950938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.950948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 00:29:40.583 [2024-07-15 20:24:37.951449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.583 [2024-07-15 20:24:37.951485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.583 qpair failed and we were unable to recover it. 
00:29:40.583 [2024-07-15 20:24:37.951921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.583 [2024-07-15 20:24:37.951933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:40.583 qpair failed and we were unable to recover it.
00:29:40.583 [the same three-line error pattern repeats for every reconnect attempt logged between 2024-07-15 20:24:37.951 and 20:24:38.037 (elapsed 00:29:40.583 through 00:29:40.865): each connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), and tqpair 0x1593220 cannot be recovered]
00:29:40.865 [2024-07-15 20:24:38.037377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.037390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.037798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.037808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.038198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.038208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.038609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.038618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.039001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.039010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.039399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.039408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.039831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.039840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.040216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.040226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.040658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.040668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.041047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.041056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 
00:29:40.865 [2024-07-15 20:24:38.041427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.041436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.041729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.041738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.042117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.042133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.042531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.042543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.042919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.042928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.043325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.043335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.043713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.043723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.044129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.044139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.044517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.044526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.865 [2024-07-15 20:24:38.044903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.044913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 
00:29:40.865 [2024-07-15 20:24:38.045416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-15 20:24:38.045453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.865 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.045891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.045903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.046377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.046414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.046849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.046861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.047348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.047385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.047789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.047801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.048219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.048229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.048653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.048663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.049094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.049103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.049496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.049507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 
00:29:40.866 [2024-07-15 20:24:38.049930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.049940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.050460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.050497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.050932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.050944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.051441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.051477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.051924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.051935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.052248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.052259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.052602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.052611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.053024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.053033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.053411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.053420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.053860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.053870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 
00:29:40.866 [2024-07-15 20:24:38.054250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.054264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.054663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.054673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.054978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.054987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.055360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.055370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.055793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.055803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.056028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.056043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.056417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.056428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.056835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.056845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.057282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.057291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.057667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.057676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 
00:29:40.866 [2024-07-15 20:24:38.058054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.058063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.058453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.058463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-07-15 20:24:38.058758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-07-15 20:24:38.058769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.059208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.059218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.059605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.059614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.060008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.060017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.060474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.060484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.060854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.060864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.061239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.061249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.061666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.061675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-07-15 20:24:38.062063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.062072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.062362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.062373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.062790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.062800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.063072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.063084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.063495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.063505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.063897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.063906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.064291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.064300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.064731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.064740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.065130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.065140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.065372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.065383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-07-15 20:24:38.065826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.065835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.066233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.066243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.066628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.066637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.067074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.067084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.067474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.067484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.067893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.067902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.068411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.068447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.068781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.068793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.069165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.069175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.069586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.069596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-07-15 20:24:38.069984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.069994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.070384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.070395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.070793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.070802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.071214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.071224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.071648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.071657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.071971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.071981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.072277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.072286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.072690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.072699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.073115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.073137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.073494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.073504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-07-15 20:24:38.073804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.073813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.074062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.074071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.074476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-07-15 20:24:38.074486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-07-15 20:24:38.074884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.074893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.075285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.075295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.075695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.075705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.076138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.076148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.076550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.076560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.076950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.076959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.077511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.077548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-07-15 20:24:38.078056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.078067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.078591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.078628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.078884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.078898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.079238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.079249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.079635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.079645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.079886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.079896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.080296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.080305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.080717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.080726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.081025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.081039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.081553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.081563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-07-15 20:24:38.081945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.081955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.082368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.082378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.082769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.082778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.083184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.083194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.083579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.083588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.083989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.083998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.084439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.084449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.084857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.084867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.085386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.085423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.085857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.085869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-07-15 20:24:38.086456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.086492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.086906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.086918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.087439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.087476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.087914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.087926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.088487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.088524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.088959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.088971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.089366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.089403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.089808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.089821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.090134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.090144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.090558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.090568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-07-15 20:24:38.090954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.090964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.091571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.091607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.091904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-07-15 20:24:38.091917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-07-15 20:24:38.092426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.092463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.092802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.092814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.093345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.093388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.093808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.093819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.094208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.094218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.094630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.094640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.094919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.094929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-07-15 20:24:38.095335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.095345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.095726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.095735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.096138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.096148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.096548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.096557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.096977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.096986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.097293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.097303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.097632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.097641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.098015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.098024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.098482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.098491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.098875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.098885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-07-15 20:24:38.099264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.099274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.099538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.099548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.099973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.099983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.100393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.100404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.100782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.100791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.101167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.101177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.101583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.101593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.102005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.102014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.102338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.102348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.102642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.102651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-07-15 20:24:38.103035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.103044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.103501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.103511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.103961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.103973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.104405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.104415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.104807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.104817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.105245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.105255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.105668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.105677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.106082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.106092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.106490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.106500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.106876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.106886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-07-15 20:24:38.107282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.107293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.107688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.107697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.108078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-07-15 20:24:38.108088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-07-15 20:24:38.108485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.108495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.108873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.108882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.109390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.109426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.109722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.109736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.110139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.110150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.110535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.110544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.110835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.110845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 
00:29:40.870 [2024-07-15 20:24:38.111136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.111145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.111530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.111540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.111933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.111943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.112352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.112362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.112790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.112799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.113067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.113078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.113527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.113537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.113914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.113924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.114248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.114257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.114675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.114685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 
00:29:40.870 [2024-07-15 20:24:38.114993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.115003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.115514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.115524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.115909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.115919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.116326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.116336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.116720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.116729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.117111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.117120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.117424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.117434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.117828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.117838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.118213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.118222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.118631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.118640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 
00:29:40.870 [2024-07-15 20:24:38.119024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.119034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.119437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.119447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.119849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.119859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.120263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.120275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.120666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.120677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.121053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.121062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.121350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.121359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.121794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.121804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.122189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.122199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.122603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.122612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 
00:29:40.870 [2024-07-15 20:24:38.123048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.123057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.123459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.123469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.123900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.123910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-07-15 20:24:38.124179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-07-15 20:24:38.124190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.124624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.124633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.125034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.125044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.125356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.125366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.125743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.125752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.126145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.126154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.126560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.126569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 
00:29:40.871 [2024-07-15 20:24:38.126951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.126960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.127430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.127439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.127828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.127837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.128218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.128228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.128602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.128611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.129028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.129037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.129441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.129450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.129825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.129834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.130217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.130227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.130462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.130474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 
00:29:40.871 [2024-07-15 20:24:38.130850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.130862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.131243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.131252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.131746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.131756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.132159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.132168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.132568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.132577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.132959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.132968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.133359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.133369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.133745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.133754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.134130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.134139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.134545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.134554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 
00:29:40.871 [2024-07-15 20:24:38.134987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.134996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.135378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.135388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.135769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.135779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.136179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.136189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.136618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.136628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.137103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.137112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.137506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.137516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.137893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.871 [2024-07-15 20:24:38.137902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.871 qpair failed and we were unable to recover it. 00:29:40.871 [2024-07-15 20:24:38.138294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.138304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.138564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.138574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 
00:29:40.872 [2024-07-15 20:24:38.138963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.138972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.139277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.139287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.139715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.139724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.140101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.140110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.140550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.140560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.140939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.140948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.141439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.141475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.141912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.141927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.142414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.142450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.142884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.142895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 
00:29:40.872 [2024-07-15 20:24:38.143398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.143443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.143749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.143761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.144168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.144180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.144582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.144592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.144962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.144971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.145348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.145358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.145785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.145796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.146214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.146225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.146645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.146655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.147050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.147060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 
00:29:40.872 [2024-07-15 20:24:38.147478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.147488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.147943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.147952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.148394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.148404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.148781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.148790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.149288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.149325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.149771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.149783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.150216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.150227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.150442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.150456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.150889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.150899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.151286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.151296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 
00:29:40.872 [2024-07-15 20:24:38.151720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.151729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.152099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.152108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.152413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.152424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.152717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.152726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.153144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.153154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.153563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.153572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.153949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.153958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.154335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.154344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.154738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.872 [2024-07-15 20:24:38.154748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.872 qpair failed and we were unable to recover it. 00:29:40.872 [2024-07-15 20:24:38.155162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.155173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-07-15 20:24:38.155566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.155576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.155953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.155962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.156344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.156354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.156755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.156764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.157173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.157183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.157586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.157596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.157987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.157997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.158260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.158271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.158720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.158729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.159108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.159117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-07-15 20:24:38.159548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.159557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.159935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.159944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.160436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.160473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.160914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.160925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.161448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.161484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.161920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.161933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.162414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.162450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.162885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.162897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.163324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.163361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.163844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.163857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-07-15 20:24:38.164352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.164389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.164862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.164874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.165392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.165429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.165865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.165877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.166293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.166330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.166640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.166652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.167056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.167066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.167464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.167474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.167908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.167918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.168324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.168334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 
00:29:40.873 [2024-07-15 20:24:38.168708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.168717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.169113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.169127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.169636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.169647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.169943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.169953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.170464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.170501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.170935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.170951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.171435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.171472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.171900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.171912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.873 qpair failed and we were unable to recover it. 00:29:40.873 [2024-07-15 20:24:38.172440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.873 [2024-07-15 20:24:38.172477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.172734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.172747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-07-15 20:24:38.173160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.173172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.173574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.173584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.173964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.173974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.174352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.174363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.174751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.174762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.175162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.175171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.175583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.175592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.175991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.176001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.176391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.176402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.176797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.176807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-07-15 20:24:38.177189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.177198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.177506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.177516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.177936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.177946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.178323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.178333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.178743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.178753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.179151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.179160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.179559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.179569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.179966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.179976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.180358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.180368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.180832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.180843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-07-15 20:24:38.181268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.181278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.181657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.181666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.182048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.182060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.182487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.182496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.182879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.182889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.183266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.183276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.183658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.183667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.184088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.184098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.184523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.184533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.184958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.184969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 
00:29:40.874 [2024-07-15 20:24:38.185447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.185483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.185915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.185926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.186394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.186430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.186865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.186877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.187337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.187373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.187816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.187827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.188206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.188218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.188620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.188630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.874 [2024-07-15 20:24:38.189063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.874 [2024-07-15 20:24:38.189072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.874 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.189468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.189479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-07-15 20:24:38.189869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.189879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.190279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.190289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.190679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.190688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.190904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.190917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.191325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.191335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.191716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.191726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.192108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.192118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.192541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.192551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.192926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.192935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.193431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.193467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-07-15 20:24:38.193897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.193910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.194419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.194456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.194768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.194780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.195185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.195196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.195498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.195508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.195887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.195897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.196292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.196303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.196702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.196711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.197091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.197100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.197477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.197487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-07-15 20:24:38.197864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.197873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.198255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.198265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.198734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.198743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.199119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.199134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.199519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.199528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.199789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.199800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.200020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.200033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.200430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.200441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.200840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.200850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.201251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.201261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-07-15 20:24:38.201649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.201658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.202066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.202075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-07-15 20:24:38.202473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-07-15 20:24:38.202483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.202891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.202900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.203326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.203336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.203748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.203758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.204158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.204168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.204565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.204575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.204955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.204964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.205384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.205394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-07-15 20:24:38.205771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.205780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.206159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.206169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.206594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.206603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.206982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.206991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.207411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.207421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.207891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.207900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.208404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.208441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.208841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.208853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.209363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.209400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.209836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.209847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-07-15 20:24:38.210236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.210251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.210550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.210560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.211035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.211045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.211425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.211435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.211859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.211869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.212291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.212300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.212687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.212697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.213098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.213109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.213532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.213543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.213937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.213946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-07-15 20:24:38.214509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.214546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.214896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.214908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.215366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.215403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.215821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.215833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.216240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.216251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.216671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.216681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.216901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.216915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.217323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.217334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.217719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.217728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-07-15 20:24:38.218155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-07-15 20:24:38.218164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-07-15 20:24:38.218584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.218593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.218994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.219004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.219425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.219436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.219857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.219867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.220285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.220294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.220672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.220681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.220982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.220991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.221387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.221400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.221780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.221790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.222004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.222015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-07-15 20:24:38.222435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.222445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.222829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.222838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.223217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.223226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.223607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.223616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.224006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.224016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.224469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.224479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.224861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.224871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.225263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.225273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.225661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.225671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.226071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.226081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-07-15 20:24:38.226501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.226512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.226913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.226923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.227154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.227164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.227361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.227374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.227795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.227805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.228228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.228239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.228623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.228633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.228934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.228943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.229362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.229371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-07-15 20:24:38.229677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.229686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-07-15 20:24:38.230092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-07-15 20:24:38.230102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it.
00:29:40.877 [... the same three-message error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats roughly 200 more times with only the timestamps changing, from [2024-07-15 20:24:38.230513] through [2024-07-15 20:24:38.318539] (console time 00:29:40.877–00:29:41.161); errno 111 is ECONNREFUSED, i.e. every reconnect attempt to 10.0.0.2:4420 was refused and the qpair was never recovered ...]
00:29:41.161 [2024-07-15 20:24:38.318530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.318539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it.
00:29:41.161 [2024-07-15 20:24:38.318951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.318961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.319280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.319289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.319687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.319696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.320115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.320128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.320575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.320584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.320994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.321003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.321418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.321427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.321685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.321695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.322111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.322121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.322555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.322565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 
00:29:41.161 [2024-07-15 20:24:38.322863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.322873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.323283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.323297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.323608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.323617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.324016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.324025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.324431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.324441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.324854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.324863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.325304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.325314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.325734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.325744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.326199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.326208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.326606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.326615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 
00:29:41.161 [2024-07-15 20:24:38.327038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.327048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.327330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.327340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.327648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.327658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.328049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.328058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.328386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.328396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.328798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-07-15 20:24:38.328808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-07-15 20:24:38.329128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.329138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.329394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.329403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.329803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.329812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.330197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.330207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 
00:29:41.162 [2024-07-15 20:24:38.330606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.330615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.330917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.330926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.331351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.331361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.331803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.331812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.332192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.332202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.332627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.332637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.332949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.332959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.333231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.333241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.333538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.333548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.333956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.333965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 
00:29:41.162 [2024-07-15 20:24:38.334373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.334383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.334772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.334782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.335197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.335206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.335606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.335615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.336064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.336074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.336508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.336518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.336922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.336931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.337347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.337357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.337814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.337823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.338346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.338382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 
00:29:41.162 [2024-07-15 20:24:38.338774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.338786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.339193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.339204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.339593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.339603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.339976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.339985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.340401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.340411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.340793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.340802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.341225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.341235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.341641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.341650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.342082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.342091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.342513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.342523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 
00:29:41.162 [2024-07-15 20:24:38.342912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.342922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.343421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.343458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.343847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.343859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.344238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-07-15 20:24:38.344249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-07-15 20:24:38.344632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.344642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.344994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.345004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.345279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.345291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.345593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.345603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.345995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.346004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.346397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.346406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 
00:29:41.163 [2024-07-15 20:24:38.346777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.346786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.347166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.347177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.347597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.347606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.347983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.347992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.348394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.348403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.348778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.348787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.349171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.349182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.349561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.349571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.349980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.349989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.350366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.350378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 
00:29:41.163 [2024-07-15 20:24:38.350801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.350810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.351180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.351198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.351572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.351582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.351981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.351990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.352308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.352318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.352709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.352718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.353096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.353106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.353564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.353575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.353999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.354009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.354311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.354321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 
00:29:41.163 [2024-07-15 20:24:38.354697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.354707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.354994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.355004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.355318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.355328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.355617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.355626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.356033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.356042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.356461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.356470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.356892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-07-15 20:24:38.356902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-07-15 20:24:38.357246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.357256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.357631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.357641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.358046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.358055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 
00:29:41.164 [2024-07-15 20:24:38.358460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.358470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.358870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.358880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.359286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.359296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.359721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.359732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.360121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.360135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.360534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.360543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.360919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.360930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.361442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.361479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.361767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.361779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.362182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.362192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 
00:29:41.164 [2024-07-15 20:24:38.362601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.362610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.363017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.363026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.363472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.363482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.363858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.363867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.364291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.364301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.364694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.364704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.365137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.365149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.365551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.365561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.365966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.365976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.366458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.366495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 
00:29:41.164 [2024-07-15 20:24:38.366860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.366872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.367391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.367427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.367861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.367872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.368344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.368381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.368819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.368831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.369213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.369223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.369452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.369465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.369780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.369791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.370099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.370109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.370513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.370524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 
00:29:41.164 [2024-07-15 20:24:38.370925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.370934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.371325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.371334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.371760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.371770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.372184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.372193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.372593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.372603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.372811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.372822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.373214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.373225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.373537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.164 [2024-07-15 20:24:38.373547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.164 qpair failed and we were unable to recover it. 00:29:41.164 [2024-07-15 20:24:38.373952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.373961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.374343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.374353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 
00:29:41.165 [2024-07-15 20:24:38.374778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.374788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.375201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.375211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.375597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.375606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.375981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.375990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.376398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.376408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.376812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.376822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.377214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.377224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.377626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.377635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.378017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.378026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.378509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.378519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 
00:29:41.165 [2024-07-15 20:24:38.378864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.378873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.379279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.379289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.379717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.379726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.380017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.380026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.380426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.380435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.380819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.380829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.381242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.381252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.381656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.381665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.382047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.382056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.382455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.382465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 
00:29:41.165 [2024-07-15 20:24:38.382881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.382890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.383289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.383300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.383722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.383731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.384151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.384161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.384546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.384555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.384840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.384848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.385256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.385266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.385670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.385680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.386096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.386105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.386566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.386576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 
00:29:41.165 [2024-07-15 20:24:38.386977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.386987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.387395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.387432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.387861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.387872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.388399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.388435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.388872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.388888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.389265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.389276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.389656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.389666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.390042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.165 [2024-07-15 20:24:38.390052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.165 qpair failed and we were unable to recover it. 00:29:41.165 [2024-07-15 20:24:38.390434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.390444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.390820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.390830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 
00:29:41.166 [2024-07-15 20:24:38.391248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.391259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.391567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.391577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.391991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.392002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.392408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.392418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.392873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.392882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.393255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.393264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.393628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.393638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.394032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.394042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.394450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.394459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.394822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.394832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 
00:29:41.166 [2024-07-15 20:24:38.395226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.395236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.395654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.395665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.396081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.396092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.396498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.396508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.396906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.396916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.397316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.397326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.397537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.397551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.397960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.397970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.398387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.398397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.398810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.398820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 
00:29:41.166 [2024-07-15 20:24:38.399246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.399255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.399546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.399558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.399969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.399978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.400364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.400374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.400777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.400786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.401186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.401196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.401568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.401577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.401945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.401954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.402333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.402343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.402741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.402750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 
00:29:41.166 [2024-07-15 20:24:38.403154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.403164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.403561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.403572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.403985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.403996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.404397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.404408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.404703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.166 [2024-07-15 20:24:38.404714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.166 qpair failed and we were unable to recover it. 00:29:41.166 [2024-07-15 20:24:38.405115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.405134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.405538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.405547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.405964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.405973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.406358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.406367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.406768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.406777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 
00:29:41.167 [2024-07-15 20:24:38.407208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.407217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.407598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.407607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.408006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.408015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.408414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.408424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.408911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.408920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.409299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.409309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.409672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.409680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.410111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.410121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.410544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.410556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.410990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.411001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 
00:29:41.167 [2024-07-15 20:24:38.411514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.411550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.411980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.411992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.412507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.412544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.413019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.413031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.413534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.413571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.413901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.413912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.414400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.414436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.414878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.414889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.415378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.415415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.415852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.415864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 
00:29:41.167 [2024-07-15 20:24:38.416375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.416412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.416722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.416734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.417110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.417120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.417551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.417561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.417943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.417952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.418436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.418472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.418775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.167 [2024-07-15 20:24:38.418788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.167 qpair failed and we were unable to recover it. 00:29:41.167 [2024-07-15 20:24:38.419109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.419119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.419524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.419534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.419913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.419922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 
00:29:41.168 [2024-07-15 20:24:38.420420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.420457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.420896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.420908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.421408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.421445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.421880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.421892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.422295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.422333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.422726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.422738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.423133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.423144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.423548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.423558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.423971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.423981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.424469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.424506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 
00:29:41.168 [2024-07-15 20:24:38.424934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.424945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.425452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.425488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.425917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.425928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.426425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.426461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.426715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.426728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.427130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.427142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.427537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.427546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.427948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.427958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.428455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.428492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.428926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.428937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 
00:29:41.168 [2024-07-15 20:24:38.429336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.429373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.429705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.429717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.430118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.430135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.430530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.430541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.430943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.430952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.431454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.431490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.431917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.431929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.432437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.432474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.432828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.432840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.433050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.433066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 
00:29:41.168 [2024-07-15 20:24:38.433477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.433487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.433903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.433913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.434401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.434437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.434792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.434804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.435201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-07-15 20:24:38.435213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-07-15 20:24:38.435661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.435671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.436061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.436071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.436381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.436391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.436797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.436806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.437220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.437230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 
00:29:41.169 [2024-07-15 20:24:38.437567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.437576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.437959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.437968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.438390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.438400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.438800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.438809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.439215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.439226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.439626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.439636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.439929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.439940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.440347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.440357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.440649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.440659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.441076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.441086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 
00:29:41.169 [2024-07-15 20:24:38.441491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.441501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.441869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.441879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.442184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.442195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.442693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.442703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.443082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.443091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.443486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.443496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.443871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.443880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.444264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.444275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.444697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.444707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.445056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.445065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 
00:29:41.169 [2024-07-15 20:24:38.445539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.445548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.445945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.445954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.446465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.446501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.446906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.446917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.447431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.447466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.447715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.447729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.448138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.448149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.448557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.448566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.448991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.449001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.449394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.449405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 
00:29:41.169 [2024-07-15 20:24:38.449788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.449798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.450207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.450599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-07-15 20:24:38.450610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-07-15 20:24:38.451008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.451022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.451430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.451440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.451855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.451864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.452177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.452188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.452548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.452558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.452985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.452994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.453391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.453401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 
00:29:41.170 [2024-07-15 20:24:38.453824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.453834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.454233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.454243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.454524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.454535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.454929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.454939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.455160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.455181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.455568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.455578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.456000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.456010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.456432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.456443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.456848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.456858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.457155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.457166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 
00:29:41.170 [2024-07-15 20:24:38.457550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.457560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.457957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.457967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.458369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.458379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.458827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.458838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.459269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.459279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.459720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.459730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.460119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.460136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.460549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.460559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.460980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.460990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.461483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.461520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 
00:29:41.170 [2024-07-15 20:24:38.461931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.461943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.462347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.462384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.462808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.462820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.463318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.463354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.463764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.463777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.464057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.464067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.464478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.464489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.464873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.464882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.465269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.465279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.465680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.465689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 
00:29:41.170 [2024-07-15 20:24:38.466098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.466107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.466487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-07-15 20:24:38.466497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-07-15 20:24:38.466879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.466889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.467416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.467454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.467877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.467891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.468378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.468414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.468846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.468857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.469236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.469247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.469642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.469651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.470062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.470071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 
00:29:41.171 [2024-07-15 20:24:38.470476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.470486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.470862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.470871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.471070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.471079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.471485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.471496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.471874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.471883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.472139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.472151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.472539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.472548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.472925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.472934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.473297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.473307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.473706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.473715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 
00:29:41.171 [2024-07-15 20:24:38.474137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.474147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.474546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.474555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.474927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.474937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.475065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.475077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.475458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.475468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.475881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.475891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.476298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.476308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.476683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.476692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.477101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.477111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.477516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.477526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 
00:29:41.171 [2024-07-15 20:24:38.477961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.477970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.478462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.478502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.478927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.478939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.479448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.479484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.479930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.479941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.480164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.480179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.480579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.480589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.480957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.480966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.481342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.481352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-07-15 20:24:38.481781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-07-15 20:24:38.481790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 
00:29:41.172 [2024-07-15 20:24:38.482169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.482179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.482555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.482565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.482868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.482878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.483271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.483280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.483700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.483709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.484085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.484094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.484562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.484572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.484954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.484963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.485459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.485495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.485926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.485937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 
00:29:41.172 [2024-07-15 20:24:38.486434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.486471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.486883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.486895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.487381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.487417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.487838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.487851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.488226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.488237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.488620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.488630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.489054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.489063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.489360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.489370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.489793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.489806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.490199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.490209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 
00:29:41.172 [2024-07-15 20:24:38.490513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.490522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.490993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.491002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.491394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.491403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.491787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.491797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.492178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.492187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.492563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.492572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.492931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.492940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.493224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.493233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.493529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.493540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.493939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.493948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 
00:29:41.172 [2024-07-15 20:24:38.494248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.494258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.494629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.494639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.495060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-07-15 20:24:38.495070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-07-15 20:24:38.495468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.495478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.495892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.495901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.496328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.496338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.496726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.496735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.497154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.497164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.497585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.497594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.497995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.498004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 
00:29:41.173 [2024-07-15 20:24:38.498424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.498433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.498812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.498821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.499223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.499233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.499541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.499550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.499932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.499941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.500230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.500242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.500653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.500662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.501036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.501046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.501450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.501460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.501877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.501886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 
00:29:41.173 [2024-07-15 20:24:38.502301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.502311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.502693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.502701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.503082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.503091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.503536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.503546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.503923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.503931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.504430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.504467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.504893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.504905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.505384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.505421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.505837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.505849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.506242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.506253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 
00:29:41.173 [2024-07-15 20:24:38.506635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.506645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.507065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.507074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.507466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.507476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.507858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.507867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.508297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.508307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.508601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.508611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.509069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.509079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.509469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.509479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.509903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.509913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.510296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.510306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 
00:29:41.173 [2024-07-15 20:24:38.510699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.510708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.511000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.511009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.511292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.511302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-07-15 20:24:38.511735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-07-15 20:24:38.511744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.512149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.512159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.512554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.512563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.512867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.512876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.513258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.513267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.513663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.513672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.514047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.514056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 
00:29:41.174 [2024-07-15 20:24:38.514512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.514522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.514905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.514914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.515226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.515236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.515632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.515641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.516017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.516027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.516430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.516440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.516855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.516865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.517156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.517166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.517585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.517595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.517990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.517999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 
00:29:41.174 [2024-07-15 20:24:38.518393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.518403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.518785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.518794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.519174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.519183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.519586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.519595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.519977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.519986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.520404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.520414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.520801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.520810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.521188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.521198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.521625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.521634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.522034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.522044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 
00:29:41.174 [2024-07-15 20:24:38.522445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.522454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.522871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.522880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.523096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.523110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.523589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.523599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.523991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.524000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.524502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.524539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.524977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.524989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.525498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.525535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.525965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.525976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.526488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.526524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 
00:29:41.174 [2024-07-15 20:24:38.526964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.526976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.527465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.527502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.527932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.527944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.528417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.528457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-07-15 20:24:38.528867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-07-15 20:24:38.528880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.529372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.529408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.529836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.529848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.530361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.530397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.530731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.530743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.531149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.531159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-07-15 20:24:38.531575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.531585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.531961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.531971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.532369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.532379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.532669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.532680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.533098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.533107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.533523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.533533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.533950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.533959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.534472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.534508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.534949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.534961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.535441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.535478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-07-15 20:24:38.535870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.535882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.536323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.536360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.536779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.536791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.537192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.537202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.537472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.537481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.537885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.537894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.538275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.538285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.538776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.538786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.539053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.539070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.539383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.539393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-07-15 20:24:38.539779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.539792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.540167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.540176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.540583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.540593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.540996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.541006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1177857 Killed "${NVMF_APP[@]}" "$@" 00:29:41.175 [2024-07-15 20:24:38.541413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.541424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-07-15 20:24:38.541819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.541828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:41.175 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:41.175 [2024-07-15 20:24:38.542242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-07-15 20:24:38.542252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:41.176 [2024-07-15 20:24:38.542654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.542663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:41.176 [2024-07-15 20:24:38.543046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.543056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.543480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.543490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.543910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.543920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.544341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.544353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.544564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.544574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.544994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.545004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.545425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.545435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.545834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.545843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.546224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.546234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 [2024-07-15 20:24:38.546614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.546624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.547003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.547013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.547315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.547326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.547704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.547715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.548117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.548133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.548517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.548528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.548932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.548942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.549340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.549350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.549750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.549760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.550169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.550181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1178879 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1178879 00:29:41.176 [2024-07-15 20:24:38.550583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.550595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1178879 ']' 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:41.176 [2024-07-15 20:24:38.551015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.551026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:41.176 [2024-07-15 20:24:38.551430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.551442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.176 [2024-07-15 20:24:38.551845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.551856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:41.176 20:24:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:41.176 [2024-07-15 20:24:38.552259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.552271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.552671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.552684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 [2024-07-15 20:24:38.553070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.553080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.553498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.553508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.553807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.553817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.554221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.554230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.554619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.554628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.555006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.555016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.555415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.555425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-07-15 20:24:38.555809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-07-15 20:24:38.555818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.556202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.556212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.556664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.556674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-07-15 20:24:38.557094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.557104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.557430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.557440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.557839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.557848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.558218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.558228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.558544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.558553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.559020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.559029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.559430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.559440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.559857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.559866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.560279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.560288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.560721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.560730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-07-15 20:24:38.561043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.561052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.561457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.561466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.561849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.561859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.562280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.562290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.562690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.562700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.563104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.563114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.563544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.563554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.563973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.563983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.564470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.564506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.564946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.564958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-07-15 20:24:38.565460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.565496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.565750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.565763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.566073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.566084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.566545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.566555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.566984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.566993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.567610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.567647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.568056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.568067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.568558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.568594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.569007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.569019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.569425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.569436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 
00:29:41.177 [2024-07-15 20:24:38.569828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.569838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-07-15 20:24:38.570336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-07-15 20:24:38.570373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.570798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.570809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.571192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.571202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.571483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.571493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.571902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.571912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.572458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.572468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.572927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.572936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.573432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.573469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.573761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.573775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 
00:29:41.178 [2024-07-15 20:24:38.574182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.574193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.574590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.574600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.575070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.575079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.575493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.575502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.575919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.575929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.576162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.576176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.576596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.576606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.577013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.577022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.577285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.577295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-07-15 20:24:38.577681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-07-15 20:24:38.577690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 
00:29:41.178 [2024-07-15 20:24:38.578138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.178 [2024-07-15 20:24:38.578148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.178 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-15 20:24:38.578576 through 20:24:38.601403, Jenkins console timestamps 00:29:41.178 to 00:29:41.452 ...]
00:29:41.452 [2024-07-15 20:24:38.601607] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization...
00:29:41.452 [2024-07-15 20:24:38.601657] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same connect()/qpair failure record continues to repeat from 2024-07-15 20:24:38.601821 through 20:24:38.633709, Jenkins console timestamps 00:29:41.452 to 00:29:41.454 ...]
00:29:41.454 EAL: No free 2048 kB hugepages reported on node 1
[... the same connect()/qpair failure record continues to repeat from 2024-07-15 20:24:38.634089 through 20:24:38.660831, Jenkins console timestamps 00:29:41.454 to 00:29:41.456 ...]
00:29:41.456 [2024-07-15 20:24:38.661223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.661233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.661630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.661640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.661892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.661901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.662346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.662356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.662652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.662662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.662940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.662950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.663380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.663390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.663778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.663787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.664168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.664177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.664412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.664421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 
00:29:41.456 [2024-07-15 20:24:38.664832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.664841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.665240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.665250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.665535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.665545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.665938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.665947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.666357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.666366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.666760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.666769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.667180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.667193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.667623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.667633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.668028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.668037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.668436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.668446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 
00:29:41.456 [2024-07-15 20:24:38.668663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.668672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.669037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.669046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.456 [2024-07-15 20:24:38.669504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.456 [2024-07-15 20:24:38.669514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.456 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.669734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.669748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.670156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.670166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.670563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.670572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.670977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.670987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.671388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.671398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.671799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.671809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.672193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.672202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 
00:29:41.457 [2024-07-15 20:24:38.672400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.672412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.672834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.672844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.673232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.673242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.673649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.673659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.674043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.674052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.674450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.674459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.674727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.674737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.675154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.675164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.675454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.675464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.675850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.675859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 
00:29:41.457 [2024-07-15 20:24:38.676239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.676249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.676651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.676660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.677073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.677083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.677489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.677499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.677880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.677890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.678285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.678295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.678681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.678690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.679077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.679086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.679471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.679481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.679882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.679892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 
00:29:41.457 [2024-07-15 20:24:38.680296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.680305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.680764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.680773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.681153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.681163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.681626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.681635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.682016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.682025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.682225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.682234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.682590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.682599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.682923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.682933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.683335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.683346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 00:29:41.457 [2024-07-15 20:24:38.683728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.457 [2024-07-15 20:24:38.683737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.457 qpair failed and we were unable to recover it. 
00:29:41.457 [2024-07-15 20:24:38.684195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.684205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.684590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.684600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.684981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.684991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.685288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.685298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.685537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.685546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.685954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.685963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.686349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.686358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.686787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.686797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.687177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.687187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.687504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.687514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 
00:29:41.458 [2024-07-15 20:24:38.687926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.687935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.688337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.688347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.688581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.688591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.688979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.688989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.689384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.689393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.689651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.689661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.690061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.690070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.690447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.690457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.690960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.690969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.691365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.691375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 
00:29:41.458 [2024-07-15 20:24:38.691675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.691685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.691999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:41.458 [2024-07-15 20:24:38.692095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.692104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.692355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.692365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.692804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.692813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.693023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.693034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.693427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.693437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.693829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.693838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.694334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.694344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.694741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.694750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.695147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.458 [2024-07-15 20:24:38.695156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.458 qpair failed and we were unable to recover it.
00:29:41.458 [2024-07-15 20:24:38.695450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.695460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.695861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.695870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.696263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.696273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.696705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.696714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.696970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.696980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.697451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.697461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.697870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.697878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.698278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.458 [2024-07-15 20:24:38.698289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.458 qpair failed and we were unable to recover it. 00:29:41.458 [2024-07-15 20:24:38.698699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.698709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.698916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.698925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 
00:29:41.459 [2024-07-15 20:24:38.699244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.699254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.699697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.699706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.700018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.700028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.700292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.700302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.700731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.700740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.701063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.701072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.701449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.701459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.701899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.701909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.702220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.702236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.702687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.702697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 
00:29:41.459 [2024-07-15 20:24:38.702898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.702907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.703313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.703323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.703714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.703724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.704116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.704131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.704421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.704431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.704813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.704822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.705126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.705136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.705424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.705434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.705838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.705847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.706180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.706190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 
00:29:41.459 [2024-07-15 20:24:38.706612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.706621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.706999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.707008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.707383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.707393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.707784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.707794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.708198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.708208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.708562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.708572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.708942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.708951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.709370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.709380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.709760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.709769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.710139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.710149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 
00:29:41.459 [2024-07-15 20:24:38.710567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.710576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.710997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.711006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.711408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.711417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.711805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.711815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.712126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.712137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.459 [2024-07-15 20:24:38.712369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.459 [2024-07-15 20:24:38.712380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.459 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.712790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.712800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.713201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.713210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.713620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.713632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.714032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.714041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 
00:29:41.460 [2024-07-15 20:24:38.714543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.714553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.714932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.714941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.715152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.715162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.715590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.715600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.715722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.715731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.716041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.716050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.716376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.716386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.716801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.716810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.717263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.717273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.717639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.717648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 
00:29:41.460 [2024-07-15 20:24:38.717960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.717970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.718201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.718213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.718616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.718626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.719006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.719016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.719422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.719432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.719815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.719825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.720261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.720272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.720716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.720726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.721130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.721141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.721545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.721555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 
00:29:41.460 [2024-07-15 20:24:38.721935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.721945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.722346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.722383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.722799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.722811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.723201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.723211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.460 [2024-07-15 20:24:38.723596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.460 [2024-07-15 20:24:38.723606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.460 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.723946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.723960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.724362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.724372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.724796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.724806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.725211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.725605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.725614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 
00:29:41.461 [2024-07-15 20:24:38.725886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.725895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.726299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.726309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.726602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.726611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.727024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.727034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.727441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.727451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.727875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.727884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.728187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.728198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.728602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.728611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.729018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.729028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.729322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.729339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 
00:29:41.461 [2024-07-15 20:24:38.729740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.729749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.729942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.729951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.730373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.730383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.730798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.730808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.731209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.731219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.731619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.731628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.732046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.732055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.732455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.732464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.732847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.732857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.733236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.733246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 
00:29:41.461 [2024-07-15 20:24:38.733534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.733543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.733968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.733977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.734319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.734331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.734737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.734747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.734981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.734996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.735390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.735400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.735837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.735846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.736321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.736358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.736796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.736808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.737192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.737202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 
00:29:41.461 [2024-07-15 20:24:38.737624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.737633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.738058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.738067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.738398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.461 [2024-07-15 20:24:38.738408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.461 qpair failed and we were unable to recover it. 00:29:41.461 [2024-07-15 20:24:38.738826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.738836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.739241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.739251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.739656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.739666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.739969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.739981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.740291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.740301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.740630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.740640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.740844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.740857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 
00:29:41.462 [2024-07-15 20:24:38.741239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.741249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.741561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.741570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.741949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.741958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.742347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.742357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.742761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.742771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.743163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.743173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.743577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.743587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.744021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.744031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.744433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.744443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.744713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.744724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 
00:29:41.462 [2024-07-15 20:24:38.745207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.745217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.745549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.745558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.745760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.745769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.746216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.746226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.746617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.746626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.747003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.747012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.747426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.747436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.747834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.747843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.748240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.748250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.748678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.748688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 
00:29:41.462 [2024-07-15 20:24:38.749091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.749101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.749414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.749423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.749807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.749815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.750189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.750199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.750605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.750614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.750818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.750828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.751194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.751204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.751587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.751597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.751999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.752009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.752430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.752440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 
00:29:41.462 [2024-07-15 20:24:38.752905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.462 [2024-07-15 20:24:38.752914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.462 qpair failed and we were unable to recover it. 00:29:41.462 [2024-07-15 20:24:38.753293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.753303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.753700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.753709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.754138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.754148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.754530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.754539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.754946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.754955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.755182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.755195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.755606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.755616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.755903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.755913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.756323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.756333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 
00:29:41.463 [2024-07-15 20:24:38.756731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.756740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-07-15 20:24:38.757030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:41.463 [2024-07-15 20:24:38.757060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:41.463 [2024-07-15 20:24:38.757067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:41.463 [2024-07-15 20:24:38.757074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:41.463 [2024-07-15 20:24:38.757079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:41.463 [2024-07-15 20:24:38.757160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.757169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-07-15 20:24:38.757251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:41.463 [2024-07-15 20:24:38.757572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.757583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-07-15 20:24:38.757524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:41.463 [2024-07-15 20:24:38.757675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:41.463 [2024-07-15 20:24:38.757676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:41.463 [2024-07-15 20:24:38.757950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.757959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-07-15 20:24:38.758172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.758183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-07-15 20:24:38.758605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.758615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
00:29:41.463 [2024-07-15 20:24:38.758995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.463 [2024-07-15 20:24:38.759005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.463 qpair failed and we were unable to recover it.
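The app_setup_trace notices above show that the nvmf target was started with tracepoint group mask 0xFFFF, so its trace history can be inspected while the test is still running. A short sketch of how that snapshot could be taken, using only the command and shared-memory file named in the notices ('spdk_trace -s nvmf -i 0' and /dev/shm/nvmf_trace.0); the destination path below is just an example:
# capture a snapshot of events from the running nvmf application (instance 0)
spdk_trace -s nvmf -i 0
# or keep the raw shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0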
00:29:41.463 [2024-07-15 20:24:38.759431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.759441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.759717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.759726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.760162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.760172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.760556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.760565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.760865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.760874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.761145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.761154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.761610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.761619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.762003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.762012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.762329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.762340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.762667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.762677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 
00:29:41.463 [2024-07-15 20:24:38.763074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.763084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.763487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.763497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.763881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.763890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.764272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.764281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.764752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.764762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.765212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.765222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.765663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.765672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.766103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.766113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.766524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.766534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 00:29:41.463 [2024-07-15 20:24:38.766918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.463 [2024-07-15 20:24:38.766928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.463 qpair failed and we were unable to recover it. 
00:29:41.463 [2024-07-15 20:24:38.767336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.767347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.767552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.767562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.767981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.767990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.768376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.768386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.768826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.768835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.769345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.769384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.769822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.769835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.770278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.770289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.770671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.770681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.770773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.770782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 
00:29:41.464 [2024-07-15 20:24:38.771040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.771049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.771474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.771484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.771871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.771880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.772283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.772293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.772591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.772600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.773033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.773042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.773438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.773448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.773834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.773844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.774250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.774259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.774663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.774673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 
00:29:41.464 [2024-07-15 20:24:38.775080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.775090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.775484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.775495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.775890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.775900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.776221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.776231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.776545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.776554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.776818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.776827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.777216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.777226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.777565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.777574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.777992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.778001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 00:29:41.464 [2024-07-15 20:24:38.778203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.464 [2024-07-15 20:24:38.778213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.464 qpair failed and we were unable to recover it. 
00:29:41.464 [2024-07-15 20:24:38.778709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.778718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.779105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.779114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.779384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.779394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.779778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.779788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.780192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.780205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.780618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.780628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.781024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.781034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.781433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.781444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.781860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.781869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.782087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.782101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-07-15 20:24:38.782516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.782527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.782909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.782918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.783378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.783387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.783807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.783816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.784341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.784378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.784830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.784843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.785256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.785267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.785695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.785705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.785998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.786008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.786431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.786441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-07-15 20:24:38.786564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.786573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.786929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.786939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.787247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.787257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.787670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.787679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.788064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.788073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.788522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.788532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.788925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.788934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.789199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.789209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.789617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.789626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.790013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.790023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 
00:29:41.465 [2024-07-15 20:24:38.790457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.790467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.790867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.790880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.791286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.791295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.791731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.791740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.792139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.792149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.792565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.792575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.792956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.792965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.793356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.793365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.465 qpair failed and we were unable to recover it. 00:29:41.465 [2024-07-15 20:24:38.793802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.465 [2024-07-15 20:24:38.793811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.794082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.794091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 
00:29:41.466 [2024-07-15 20:24:38.794484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.794494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.794905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.794915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.795328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.795365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.795807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.795819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.796204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.796214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.796534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.796544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.796833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.796843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.797224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.797234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.797650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.797659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.798049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.798058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 
00:29:41.466 [2024-07-15 20:24:38.798451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.798461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.798872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.798881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.799272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.799282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.799524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.799533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.799948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.799957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.800251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.800262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.800671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.800680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.801148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.801157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.801596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.801605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.801993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.802002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 
00:29:41.466 [2024-07-15 20:24:38.802383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.802393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.802810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.803259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.803269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.803660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.803670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.803960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.803969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.804452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.804462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.804868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.804878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.805138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.805149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.805426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.805437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.805871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.805880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 
00:29:41.466 [2024-07-15 20:24:38.806392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.806428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.806868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.806880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.807095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.807106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.807414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.807425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.807599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.807608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.807811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.807835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.466 [2024-07-15 20:24:38.808239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.466 [2024-07-15 20:24:38.808249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.466 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.808652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.808661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.809100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.809109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.809424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.809434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 
00:29:41.467 [2024-07-15 20:24:38.809776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.809786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.810081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.810092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.810521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.810531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.810914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.810923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.811274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.811284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.811736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.811745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.812135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.812145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.812446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.812455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.812848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.812857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.813270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.813280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 
00:29:41.467 [2024-07-15 20:24:38.813701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.813710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.814092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.814101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.814568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.814577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.814958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.814967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.815481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.815518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.816003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.816014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.816322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.816332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.816740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.816750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.817197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.817207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.817638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.817654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 
00:29:41.467 [2024-07-15 20:24:38.818034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.818043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.818456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.818465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.818861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.818871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.819191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.819201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.819608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.819617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.819998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.820007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.820402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.820412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.820791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.820800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.821104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.821115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.821539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.821549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 
00:29:41.467 [2024-07-15 20:24:38.822009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.822018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.822398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.822408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.822716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.822726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.822961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.822970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.467 qpair failed and we were unable to recover it. 00:29:41.467 [2024-07-15 20:24:38.823200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.467 [2024-07-15 20:24:38.823210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.823665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.823674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.824054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.824063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.824549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.824559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.824957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.824966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.825360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.825370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 
00:29:41.468 [2024-07-15 20:24:38.825667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.825677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.826089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.826098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.826275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.826285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.826699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.826708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.827115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.827130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.827525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.827534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.827923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.827935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.828337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.828374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.828817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.828829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.829210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.829221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 
00:29:41.468 [2024-07-15 20:24:38.829744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.829753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.830056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.830066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.830282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.830291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.830707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.830716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.831129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.831139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.831522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.831531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.831944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.831953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.832468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.832504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.832940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.832952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.833434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.833471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 
00:29:41.468 [2024-07-15 20:24:38.833912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.833924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.834197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.834208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.834631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.834641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.834846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.834856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.835269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.835279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.835445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.835455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.835841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.835850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.836240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.836250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.836762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.836771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.837154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.837164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 
00:29:41.468 [2024-07-15 20:24:38.837575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.837585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.837978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.468 [2024-07-15 20:24:38.837988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.468 qpair failed and we were unable to recover it. 00:29:41.468 [2024-07-15 20:24:38.838418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.838428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.838866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.838878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.839300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.839309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.839518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.839527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.839892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.839902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.840341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.840351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.840738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.840747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 00:29:41.469 [2024-07-15 20:24:38.841140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.469 [2024-07-15 20:24:38.841150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.469 qpair failed and we were unable to recover it. 
00:29:41.469 [2024-07-15 20:24:38.841577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.469 [2024-07-15 20:24:38.841586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.469 qpair failed and we were unable to recover it.
00:29:41.469 [2024-07-15 20:24:38.842018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.469 [2024-07-15 20:24:38.842027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.469 qpair failed and we were unable to recover it.
00:29:41.469 [... the same three-line failure repeats for every subsequent reconnect attempt in this window (target timestamps 2024-07-15 20:24:38.842 through 20:24:38.920): connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1593220 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:41.746 [2024-07-15 20:24:38.920516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.746 [2024-07-15 20:24:38.920526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.746 qpair failed and we were unable to recover it.
00:29:41.746 [2024-07-15 20:24:38.920948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.920959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.921215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.921225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.921672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.921681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.922088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.922097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.922483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.922493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.922710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.922719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.923156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.923165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.923461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.923470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.923874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.923883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.924272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.924282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 
00:29:41.746 [2024-07-15 20:24:38.924678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.924687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.924990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.924999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.925519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.925528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.925764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.746 [2024-07-15 20:24:38.925773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.746 qpair failed and we were unable to recover it. 00:29:41.746 [2024-07-15 20:24:38.926158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.926168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.926554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.926569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.926967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.926977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.927401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.927410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.927667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.927677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.928082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.928092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 
00:29:41.747 [2024-07-15 20:24:38.928382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.928393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.928793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.928802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.929186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.929195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.929562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.929571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.929981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.929990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.930272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.930283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.930735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.930744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.931039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.931048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.931459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.931469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.931860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.931869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 
00:29:41.747 [2024-07-15 20:24:38.932177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.932187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.932568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.932577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.932961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.932970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.933360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.933369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.933792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.933802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.934094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.934103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.934498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.934508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.934896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.934906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.935326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.935363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.935809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.935820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 
00:29:41.747 [2024-07-15 20:24:38.936217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.936228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.936641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.936655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.937060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.937069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.937546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.937557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.937946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.937955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.938468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.938504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.747 [2024-07-15 20:24:38.938944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.747 [2024-07-15 20:24:38.938955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.747 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.939478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.939515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.939856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.939867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.940405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.940442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 
00:29:41.748 [2024-07-15 20:24:38.940775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.940787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.941186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.941197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.941597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.941606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.941998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.942007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.942428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.942438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.942832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.942842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.943307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.943317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.943714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.943723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.944110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.944119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.944357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.944367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 
00:29:41.748 [2024-07-15 20:24:38.944767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.944776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.945202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.945213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.945616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.945626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.945919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.945929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.946309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.946318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.946540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.946550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.947028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.947037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.947449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.947459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.947852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.947864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.948251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.948261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 
00:29:41.748 [2024-07-15 20:24:38.948680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.948690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.948986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.948996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.949423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.949433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.949764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.949773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.950129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.950138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.950523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.950532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.950915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.950924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.951461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.951497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.951935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.951946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.952433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.952469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 
00:29:41.748 [2024-07-15 20:24:38.952824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.952837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.953065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.953075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.953489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.953500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.953804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.953814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.954048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.954058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.954474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.954484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.954885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.954895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.748 [2024-07-15 20:24:38.955298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.748 [2024-07-15 20:24:38.955308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.748 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.955691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.955701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.956087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.956096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 
00:29:41.749 [2024-07-15 20:24:38.956492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.956501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.956959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.956968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.957328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.957365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.957792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.957805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.958338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.958376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.958841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.958857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.959297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.959308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.959736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.959746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.960133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.960143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.960578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.960587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 
00:29:41.749 [2024-07-15 20:24:38.960970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.960979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.961187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.961197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.961411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.961420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.961799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.961808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.962129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.962139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.962533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.962542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.962928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.962937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.963142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.963557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.963567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.963827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.963836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 
00:29:41.749 [2024-07-15 20:24:38.964140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.964149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.964540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.964549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.964959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.964968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.965395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.965405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.965807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.965816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.966227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.966238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.966660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.966670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.966940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.966949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.967344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.967354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.967737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.967746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 
00:29:41.749 [2024-07-15 20:24:38.968129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.968139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.968591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.968600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.968985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.968994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.969489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.969525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.969782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.969794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.970207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.970218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.970644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.970653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.971107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.971117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.971538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.749 [2024-07-15 20:24:38.971548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.749 qpair failed and we were unable to recover it. 00:29:41.749 [2024-07-15 20:24:38.971777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.750 [2024-07-15 20:24:38.971786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.750 qpair failed and we were unable to recover it. 
00:29:41.750 [2024-07-15 20:24:38.972103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.750 [2024-07-15 20:24:38.972113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.750 qpair failed and we were unable to recover it.
00:29:41.750 [... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 20:24:38.972 through 20:24:39.046; only the first and last occurrences are shown here ...]
00:29:41.755 [2024-07-15 20:24:39.046386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.755 [2024-07-15 20:24:39.046396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420
00:29:41.755 qpair failed and we were unable to recover it.
00:29:41.755 [2024-07-15 20:24:39.046777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.046786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.046986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.046995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.047282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.047292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.047613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.047622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.048015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.048024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.048502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.048512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.048929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.048938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.049205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.049215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.049634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.049644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.050031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.050040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 
00:29:41.755 [2024-07-15 20:24:39.050320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.050330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.050743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.050753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.051175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.051185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.051466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.051476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.051736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.051746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.051975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.051984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.052375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.052385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.052786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.052796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.053029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.053038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.053430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.053440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 
00:29:41.755 [2024-07-15 20:24:39.053843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.053853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.054255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.054266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.054675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.054684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.055142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.055152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.055377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.055388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.055596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.055606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.056019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.056029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.056415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.056425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.056797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.056807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 00:29:41.755 [2024-07-15 20:24:39.057238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.755 [2024-07-15 20:24:39.057248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.755 qpair failed and we were unable to recover it. 
00:29:41.756 [2024-07-15 20:24:39.057566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.057575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.058053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.058062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.058459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.058469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.058854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.058864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.059267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.059276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.059717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.059726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.060106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.060115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.060338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.060348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.060661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.060670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.061080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.061090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 
00:29:41.756 [2024-07-15 20:24:39.061305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.061317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.061719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.061729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.061940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.061950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.062371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.062381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.062789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.063183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.063192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.063581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.063590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.063890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.063900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.064285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.064294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.064714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.064724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 
00:29:41.756 [2024-07-15 20:24:39.065104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.065113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.065406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.065416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.065828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.065837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.066221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.066230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.066677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.066686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.067069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.067079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.067295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.067305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.067706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.067716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.068115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.068131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.068532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.068541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 
00:29:41.756 [2024-07-15 20:24:39.068988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.068997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.756 qpair failed and we were unable to recover it. 00:29:41.756 [2024-07-15 20:24:39.069535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.756 [2024-07-15 20:24:39.069572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.070010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.070022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.070436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.070447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.070879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.070889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.071390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.071426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.071722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.071735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.072147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.072162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.072562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.072571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.072995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.073005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 
00:29:41.757 [2024-07-15 20:24:39.073431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.073441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.073756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.073765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.074214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.074224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.074624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.074633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.075040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.075050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.075323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.075333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.075655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.075665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.075896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.075907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.076314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.076323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.076621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.076631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 
00:29:41.757 [2024-07-15 20:24:39.077093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.077102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.077518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.077528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.077908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.077918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.078297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.078306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.078709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.078719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.078976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.078986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.079418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.079428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.079810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.079819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.080218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.080228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.080622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.080632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 
00:29:41.757 [2024-07-15 20:24:39.081029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.081039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.081468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.081477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.081882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.081891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.082128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.082138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.082543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.082554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.082946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.082956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.083534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.083571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.083864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.083876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.084430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.084466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.084865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.084876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 
00:29:41.757 [2024-07-15 20:24:39.085376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.085413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.085848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.085860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.757 [2024-07-15 20:24:39.086339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.757 [2024-07-15 20:24:39.086376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.757 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.086618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.086629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.087049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.087059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.087456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.087466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.087853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.087863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.088292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.088302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.088686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.088697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.089141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.089152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 
00:29:41.758 [2024-07-15 20:24:39.089628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.089637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.090022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.090030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.090428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.090437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.090820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.090830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.091131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.091144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.091603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.091612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.092029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.092038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.092439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.092449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.092858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.092867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.093066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.093075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 
00:29:41.758 [2024-07-15 20:24:39.093366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.093376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.093816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.093825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.094212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.094222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.094631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.094641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.094964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.094973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.095393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.095402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.095788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.095797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.096187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.096204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.096623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.096633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.097021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.097030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 
00:29:41.758 [2024-07-15 20:24:39.097247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.097256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.097669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.097677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.098059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.098069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.098291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.098301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.098569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.098578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.098832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.098843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.099077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.099086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.099530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.099540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.758 [2024-07-15 20:24:39.099963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.758 [2024-07-15 20:24:39.099972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.758 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.100376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.100385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 
00:29:41.759 [2024-07-15 20:24:39.100603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.100612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.101007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.101016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.101430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.101439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.101651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.101664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.101965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.101974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.102181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.102191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.102601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.102611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.103052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.103062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.103377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.103387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.103810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.103820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 
00:29:41.759 [2024-07-15 20:24:39.104223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.104233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.104325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.104333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.104711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.104720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.104922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.104931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.105349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.105358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.105743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.105752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.106135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.106144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.106403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.106413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.106810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.106820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.107229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.107238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 
00:29:41.759 [2024-07-15 20:24:39.107658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.107668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.107889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.107900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.108310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.108322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.108710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.108719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.109100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.109109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.109543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.109553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.109962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.109972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.110326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.110335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.110717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.110726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.111152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.111162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 
00:29:41.759 [2024-07-15 20:24:39.111544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.111554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.111962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.111971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.112378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.112388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.112774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.112783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.113067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.113077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.113364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.113374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.113592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.113601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.113938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.113948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.114031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.759 [2024-07-15 20:24:39.114042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.759 qpair failed and we were unable to recover it. 00:29:41.759 [2024-07-15 20:24:39.114235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.114245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 
00:29:41.760 [2024-07-15 20:24:39.114627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.114636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.115049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.115058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.115457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.115467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.115874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.115884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.116274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.116284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.116545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.116555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.117013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.117022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.117227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.117237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.117608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.117617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.117991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.118002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 
00:29:41.760 [2024-07-15 20:24:39.118262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.118272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.118489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.118498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.118910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.118919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.119301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.119310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.119758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.119768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.120170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.120181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.120381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.120390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.120874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.120884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.121303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.121312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.121607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.121616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 
00:29:41.760 [2024-07-15 20:24:39.121816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.121825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.122150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.122159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.122505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.122514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.122920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.122929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.123322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.123331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.123741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.123750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.124007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.124017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.124234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.124244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.124676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.124685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.125077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.125086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 
00:29:41.760 [2024-07-15 20:24:39.125473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.125483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.125559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.760 [2024-07-15 20:24:39.125567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.760 qpair failed and we were unable to recover it. 00:29:41.760 [2024-07-15 20:24:39.125927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.125936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.126344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.126354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.126639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.126650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.126860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.126869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.127125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.127135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.127512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.127521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.127611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.127619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.128025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.128034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 
00:29:41.761 [2024-07-15 20:24:39.128263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.128275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.128532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.128542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.128959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.128968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.129350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.129361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.129667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.129676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.130079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.130088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.130314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.130324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.130513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.130523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.130940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.130950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.131350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.131360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 
00:29:41.761 [2024-07-15 20:24:39.131758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.131768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.132173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.132182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.132595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.132604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.132995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.133004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.133390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.133400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.133670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.133679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.134077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.134086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.134382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.134393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.134812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.134821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.135040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.135049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 
00:29:41.761 [2024-07-15 20:24:39.135430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.135439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.135824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.135834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.135986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.135995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.136301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.136310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.136607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.136616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.137011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.137020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.137477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.137486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.137865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.137875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.761 qpair failed and we were unable to recover it. 00:29:41.761 [2024-07-15 20:24:39.138092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.761 [2024-07-15 20:24:39.138102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.138286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.138295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 
00:29:41.762 [2024-07-15 20:24:39.138719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.138728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.139142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.139152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.139556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.139565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.139861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.139870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.140282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.140292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.140697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.140706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.141094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.141103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.141489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.141500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.141886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.141895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.142323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.142333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 
00:29:41.762 [2024-07-15 20:24:39.142721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.142729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.143062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.143072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.143476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.143486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.143573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.143582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1593220 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.144094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.144192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.144743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.144830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.145447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.145535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.146060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.146093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.146664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.146751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.147384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.147471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 
00:29:41.762 [2024-07-15 20:24:39.147998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.148031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.148596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.148683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.149404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.149491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.149811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.149845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.150295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.150326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.150809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.150837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.151251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.151279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.151440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.151467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.151746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.151773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.152047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.152075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 
00:29:41.762 [2024-07-15 20:24:39.152578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.152606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.153037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.153064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.153385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.153413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.153810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.153837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.154290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.154321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.154582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.154608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.154897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.154924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.762 qpair failed and we were unable to recover it. 00:29:41.762 [2024-07-15 20:24:39.155231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.762 [2024-07-15 20:24:39.155260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.155672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.155699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.156052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.156088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 
00:29:41.763 [2024-07-15 20:24:39.156459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.156489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.156846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.156873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.157154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.157181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.157606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.157633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.158082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.158109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.158559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.158586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.158888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.158915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.159391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.159428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.159847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.159874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.160320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.160348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 
00:29:41.763 [2024-07-15 20:24:39.160803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.160830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.161293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.161321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.161665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.161692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.162027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.162053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.162495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.162524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.162760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.162787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.163137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.163165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:41.763 [2024-07-15 20:24:39.163511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.763 [2024-07-15 20:24:39.163542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:41.763 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.163897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.163928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.164361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.164391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 
00:29:42.035 [2024-07-15 20:24:39.164628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.164654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.164996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.165024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.165523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.165551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.165777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.165803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.166269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.166297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.166609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.166639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.167108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.167144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.167565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.167592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.168035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.168063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.168496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.168524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 
00:29:42.035 [2024-07-15 20:24:39.168957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.168984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.169410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.169437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.169673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.169698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.170035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.170062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.170537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.170567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.170804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.170831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.171244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.171272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.171516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.171542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.171676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.171706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.172188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.172216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 
00:29:42.035 [2024-07-15 20:24:39.172655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.172682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.173139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.173167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.173514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.173541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.173908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.173936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.174183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.174211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.174632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.174659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.174792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.174819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.175235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.175272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.175605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.175632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.176076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.176103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 
00:29:42.035 [2024-07-15 20:24:39.176568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.176596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.176851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.176884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.177109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.177146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.035 qpair failed and we were unable to recover it. 00:29:42.035 [2024-07-15 20:24:39.177687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.035 [2024-07-15 20:24:39.177714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.178159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.178188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.178464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.178491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.178848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.178876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.179216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.179244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.179689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.179715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.179974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.180001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 
00:29:42.036 [2024-07-15 20:24:39.180510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.180538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.180788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.180814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.181264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.181292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.181617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.181644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.182072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.182099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.182549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.182576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.183009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.183036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.183388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.183416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.183889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.183916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.184359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.184386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 
00:29:42.036 [2024-07-15 20:24:39.184680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.184707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.184946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.184973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.185235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.185262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.185797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.185824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.186273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.186302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.186600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.186627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.187063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.187091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.187409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.187441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.187789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.187817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.188253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.188281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 
00:29:42.036 [2024-07-15 20:24:39.188718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.188745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.189191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.189220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.189688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.189715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.190165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.190193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.190639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.190666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.191082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.191109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.191443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.191472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.191912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.191946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.192184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.192211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.192665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.192692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 
00:29:42.036 [2024-07-15 20:24:39.193199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.193227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.193688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.193715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.194158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.194187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.036 [2024-07-15 20:24:39.194622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.036 [2024-07-15 20:24:39.194649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.036 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.194999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.195025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.195253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.195280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.195617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.195644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.196088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.196115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.196544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.196573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.196869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.196897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 
00:29:42.037 [2024-07-15 20:24:39.197347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.197375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.197710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.197741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.198157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.198186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.198517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.198544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.198977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.199004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.199437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.199465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.199906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.199933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.200190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.200216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.200681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.200708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.201181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.201209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 
00:29:42.037 [2024-07-15 20:24:39.201634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.201661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.202094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.202143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.202602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.202629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.203031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.203060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.203271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.203300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.203541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.203568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.204006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.204033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.204467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.204495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.204984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.205012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.205447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.205475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 
00:29:42.037 [2024-07-15 20:24:39.205918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.205946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.206367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.206395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.206741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.206768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.207187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.207215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.207655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.207682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.208132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.208161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.208600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.208627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.209073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.209106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.209442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.209471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.209748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.209775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 
00:29:42.037 [2024-07-15 20:24:39.210108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.210159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.210600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.210627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.211075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.211101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.211533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.037 [2024-07-15 20:24:39.211560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.037 qpair failed and we were unable to recover it. 00:29:42.037 [2024-07-15 20:24:39.211988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.212014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.212492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.212520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.212960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.212987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.213440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.213468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.213887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.213913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.214348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.214376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 
00:29:42.038 [2024-07-15 20:24:39.214824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.214851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.215291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.215319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.215555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.215581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.216003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.216030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.216466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.216494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.216822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.216853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.217183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.217213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.217665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.217693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.218117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.218166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.218597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.218624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 
00:29:42.038 [2024-07-15 20:24:39.219056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.219083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.219572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.219600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.219976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.220002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.220325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.220355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.220802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.220831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.221292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.221320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.221763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.221790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.222172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.222200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.222464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.222493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.222901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.222930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 
00:29:42.038 [2024-07-15 20:24:39.223369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.223396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.223845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.223872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.224300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.224327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.224581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.224610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.225063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.225091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.225574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.225602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.226042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.226069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.226507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.226537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.226986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.227014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.227312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.227340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 
00:29:42.038 [2024-07-15 20:24:39.227773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.227800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.228233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.228262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.228790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.228817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.229275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.229303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.229631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.038 [2024-07-15 20:24:39.229658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.038 qpair failed and we were unable to recover it. 00:29:42.038 [2024-07-15 20:24:39.230132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.039 [2024-07-15 20:24:39.230160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.039 qpair failed and we were unable to recover it. 00:29:42.039 [2024-07-15 20:24:39.230667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.039 [2024-07-15 20:24:39.230695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.039 qpair failed and we were unable to recover it. 00:29:42.039 [2024-07-15 20:24:39.231139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.039 [2024-07-15 20:24:39.231166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.039 qpair failed and we were unable to recover it. 00:29:42.039 [2024-07-15 20:24:39.231477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.039 [2024-07-15 20:24:39.231504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.039 qpair failed and we were unable to recover it. 00:29:42.039 [2024-07-15 20:24:39.231936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.039 [2024-07-15 20:24:39.231962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.039 qpair failed and we were unable to recover it. 
00:29:42.039 [2024-07-15 20:24:39.232296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.039 [2024-07-15 20:24:39.232325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420
00:29:42.039 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 20:24:39.232296 and 20:24:39.321214 ...]
00:29:42.044 [2024-07-15 20:24:39.321185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.044 [2024-07-15 20:24:39.321214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420
00:29:42.044 qpair failed and we were unable to recover it.
00:29:42.044 [2024-07-15 20:24:39.321636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.321665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.322087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.322116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.322255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.322284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.322621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.322650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.323007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.323041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.323480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.323509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.323808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.323836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.324177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.324206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.324443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.324470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.324919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.324945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 
00:29:42.044 [2024-07-15 20:24:39.325396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.325425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.325875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.325903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.326243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.326274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.326741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.326769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.044 [2024-07-15 20:24:39.327214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.044 [2024-07-15 20:24:39.327243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.044 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.327704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.327730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.328187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.328215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.328348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.328375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.328835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.328863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.329195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.329223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 
00:29:42.045 [2024-07-15 20:24:39.329530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.329559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.330009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.330036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.330463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.330491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.330937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.330964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.331400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.331428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.331852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.331879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.332315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.332343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.332792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.332820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.333162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.333191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.333656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.333683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 
00:29:42.045 [2024-07-15 20:24:39.334152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.334181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.334441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.334468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.334942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.334969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.335506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.335533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.335789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.335815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.336262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.336290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.336725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.336752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.337191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.337219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.337486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.337512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.337937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.337964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 
00:29:42.045 [2024-07-15 20:24:39.338399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.338426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.338862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.338889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.339346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.339373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.339828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.339855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.340308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.340347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.340798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.340825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.341271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.341301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.341796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.341823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.342137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.342165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.342491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.342523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 
00:29:42.045 [2024-07-15 20:24:39.342939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.342967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.343284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.343312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.343781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.343808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.344313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.344340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.344786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.344814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.045 [2024-07-15 20:24:39.345308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.045 [2024-07-15 20:24:39.345337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.045 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.345769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.345796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.346231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.346259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.346594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.346622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.347050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.347077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 
00:29:42.046 [2024-07-15 20:24:39.347512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.347540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.347973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.347999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.348432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.348460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.348909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.348935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.349254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.349284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.349754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.349782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.350334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.350421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.350818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.350853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.351288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.351319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.351560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.351587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 
00:29:42.046 [2024-07-15 20:24:39.352030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.352058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.352520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.352550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.352995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.353022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.353512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.353542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.353995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.354023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.354464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.354493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.354943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.354971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.355405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.355433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.355671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.355698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.356161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.356189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 
00:29:42.046 [2024-07-15 20:24:39.356618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.356645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.357080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.357107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.357550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.357578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.357897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.357927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.358369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.358406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.358670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.358706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.359012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.359040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.359486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.359515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.359961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.359988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.360461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.360489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 
00:29:42.046 [2024-07-15 20:24:39.360783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.360811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.361305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.361333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.361833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.361860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.362311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.362339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.362770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.362797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.363245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.363272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.046 [2024-07-15 20:24:39.363729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.046 [2024-07-15 20:24:39.363756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.046 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.363935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.363961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.364439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.364468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.364721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.364747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 
00:29:42.047 [2024-07-15 20:24:39.365195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.365224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.365598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.365625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.366100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.366147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.366585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.366612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.366844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.366870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.367195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.367229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.367662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.367690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.368145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.368174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.368649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.368677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.368919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.368945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 
00:29:42.047 [2024-07-15 20:24:39.369323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.369351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.369650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.369678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.370183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.370211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.370696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.370723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.371175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.371203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.371455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.371481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.371823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.371850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.372287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.372314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.372509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.372535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.372853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.372880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 
00:29:42.047 [2024-07-15 20:24:39.373330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.373357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.373847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.373874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.374331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.374360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.374706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.374737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.375207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.375242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.375695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.375722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.376157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.376186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.376306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.376333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.376572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.376599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 [2024-07-15 20:24:39.377001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.377028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 
00:29:42.047 [2024-07-15 20:24:39.377509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.047 [2024-07-15 20:24:39.377982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.378010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:42.047 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:42.047 [2024-07-15 20:24:39.378503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.047 [2024-07-15 20:24:39.378530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.047 qpair failed and we were unable to recover it. 00:29:42.047 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:42.048 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.048 [2024-07-15 20:24:39.379042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.379069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.379544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.379573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.379984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.380011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.380267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.380295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.380719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.380747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 
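(Context for the xtrace lines interleaved above: the `common/autotest_common.sh@858 -- # (( i == 0 ))` check followed by `@862 -- # return 0` and `nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt` looks like the tail of a bounded retry loop in the test harness finishing its wait for the nvmf target. The sketch below is a hypothetical illustration of that loop shape only; the function name, retry count, and probe method are assumptions, not the actual SPDK helper.)

```bash
# Hypothetical sketch of a bounded wait loop whose exit matches the
# "(( i == 0 ))" / "return 0" pattern traced above; the real helper in
# common/autotest_common.sh may differ in details.
wait_for_tcp_listener() {
    local ip=$1 port=$2 i

    for (( i = 40; i != 0; i-- )); do
        # Probe the port; the probe keeps failing (connection refused,
        # errno 111) until something starts listening on $ip:$port.
        if timeout 1 bash -c "exec 3<>/dev/tcp/$ip/$port" 2>/dev/null; then
            break
        fi
        sleep 0.5
    done

    (( i == 0 )) && return 1   # budget exhausted: nothing ever listened
    return 0                   # a listener answered within the budget
}

# e.g.: wait_for_tcp_listener 10.0.0.2 4420 || echo "target never came up"
```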
00:29:42.048 [2024-07-15 20:24:39.381180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.381208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.381677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.381704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.381998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.382025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.382474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.382502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.382840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.382867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.383301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.383330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.383752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.383778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.384050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.384077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.384452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.384481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.384910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.384937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 
00:29:42.048 [2024-07-15 20:24:39.385220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.385247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.385497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.385526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.385976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.386003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.386367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.386396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.386688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.386716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.387173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.387201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.387638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.387665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.388117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.388151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.388517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.388545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.388962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.388989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 
00:29:42.048 [2024-07-15 20:24:39.389446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.389474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.389782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.389810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.390265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.390293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.390600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.390627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.391090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.391130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.391556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.391583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.392018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.392044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.392477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.392504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.392744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.392769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.393176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.393204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 
00:29:42.048 [2024-07-15 20:24:39.393679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.393707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.394140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.394169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.048 [2024-07-15 20:24:39.394634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.048 [2024-07-15 20:24:39.394660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.048 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.395109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.395144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.395584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.395610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.396054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.396081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.396540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.396568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.397017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.397043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.397474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.397502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.397936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.397962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 
00:29:42.049 [2024-07-15 20:24:39.398397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.398425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.398758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.398784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.399139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.399167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.399639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.399666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.400094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.400128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.400352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.400379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.400786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.400813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.401160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.401188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.401613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.401639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.401944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.401971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 
00:29:42.049 [2024-07-15 20:24:39.402234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.402261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.402515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.402544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.402983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.403010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.403340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.403368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.403793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.403820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.404290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.404317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.404764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.404792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.405242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.405271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.405700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.405727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.406087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.406114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 
00:29:42.049 [2024-07-15 20:24:39.406575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.406604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.406859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.406885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.407334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.407363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.407812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.407839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.408172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.408206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.408659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.408687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.409023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.409053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.409489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.409518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.409953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.409980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.410417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.410444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 
00:29:42.049 [2024-07-15 20:24:39.410781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.410807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.410928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.410954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.411395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.411423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.049 qpair failed and we were unable to recover it. 00:29:42.049 [2024-07-15 20:24:39.411875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.049 [2024-07-15 20:24:39.411903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.412203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.412230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.412650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.412677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.412914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.412941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.413372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.413400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.413921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.413948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.414391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.414419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 
00:29:42.050 [2024-07-15 20:24:39.414684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.414713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.415050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.415076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.415524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.415551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.415984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.416012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.416356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.416383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.416830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.416857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.050 [2024-07-15 20:24:39.417364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.417393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.050 [2024-07-15 20:24:39.417852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.417879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 
00:29:42.050 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.050 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.050 [2024-07-15 20:24:39.418338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.418366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.418812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.418845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.419287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.419315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.419750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.419777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.420214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.420241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.420687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.420713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.420951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.420977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.421293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.421325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.421573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.421600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 
00:29:42.050 [2024-07-15 20:24:39.421919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.421946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.422406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.422433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.422773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.422803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.423237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.423265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.423769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.423796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.424239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.424266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.424728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.424756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.425147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.425176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.425605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.425633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.425935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.425961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 
00:29:42.050 [2024-07-15 20:24:39.426434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.426463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.426806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.426833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.427287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.427315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.427776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.427803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.428260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.428287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.050 [2024-07-15 20:24:39.428541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.050 [2024-07-15 20:24:39.428567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.050 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.428906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.428933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.429381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.429410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.429848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.429875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.430335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.430364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 
00:29:42.051 [2024-07-15 20:24:39.430804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.430832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.431201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.431232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.431699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.431726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.432384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.432474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.433003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.433039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.433536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.433569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.433868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.433896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.434333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.434363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.434827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.434855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.435295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.435323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 
00:29:42.051 [2024-07-15 20:24:39.435784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.435812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.436254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.436283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 Malloc0 00:29:42.051 [2024-07-15 20:24:39.436737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.436776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.437289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.437318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.051 [2024-07-15 20:24:39.437655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.437684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:42.051 [2024-07-15 20:24:39.438110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.438149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.051 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.051 [2024-07-15 20:24:39.438585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.438614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.438954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.438989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 
00:29:42.051 [2024-07-15 20:24:39.439446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.439474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.439819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.439848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.440314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.440309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.051 [2024-07-15 20:24:39.440343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.440785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.440812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.441199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.441228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.441655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.441683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.442041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.442068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.442522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.442551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.443010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.443038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 
00:29:42.051 [2024-07-15 20:24:39.443383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.443412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.443840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.443867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.444129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.444156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.444428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.444456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.444754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.444781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.445110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.445147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.445470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.445500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.051 [2024-07-15 20:24:39.445956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.051 [2024-07-15 20:24:39.445984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.051 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.446420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.446449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.446903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.446929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 
00:29:42.052 [2024-07-15 20:24:39.447366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.447396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.447672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.447700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.448153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.448182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.448722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.448750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.052 [2024-07-15 20:24:39.449183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.449212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.052 [2024-07-15 20:24:39.449670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.449697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.052 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.052 [2024-07-15 20:24:39.450107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.450145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.450403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.450430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 
00:29:42.052 [2024-07-15 20:24:39.450866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.450893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.451329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.451358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.451804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.451831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.452099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.452133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.452463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.452491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.452869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.452896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.453239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.453267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.453728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.453755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.454204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.454233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.454578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.454606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 
00:29:42.052 [2024-07-15 20:24:39.455039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.455066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.052 [2024-07-15 20:24:39.455365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.052 [2024-07-15 20:24:39.455393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.052 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.455825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.455854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.456313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.456341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.456662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.456689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.456951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.456978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.316 [2024-07-15 20:24:39.457413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.457442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.316 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.316 [2024-07-15 20:24:39.457906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.457934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 
00:29:42.316 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.316 [2024-07-15 20:24:39.458139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.458170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.458641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.458669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.459051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.459078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.459564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.459592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.460025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.460052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.460500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.316 [2024-07-15 20:24:39.460529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.316 qpair failed and we were unable to recover it. 00:29:42.316 [2024-07-15 20:24:39.460964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.460991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.461462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.461489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.461935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.461963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 
00:29:42.317 [2024-07-15 20:24:39.462213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.462242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.462702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.462730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.462976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.463004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.463446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.463475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.463928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.463955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.464389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.464418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.464756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.464783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.317 [2024-07-15 20:24:39.465146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.465184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.465440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.465467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 
00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.317 [2024-07-15 20:24:39.465911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.465938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.317 [2024-07-15 20:24:39.466375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.466405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.466851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.466878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.467338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.467367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.467803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.467838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.468292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.317 [2024-07-15 20:24:39.468321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9e78000b90 with addr=10.0.0.2, port=4420 00:29:42.317 qpair failed and we were unable to recover it. 
00:29:42.317 [2024-07-15 20:24:39.468569] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.317 [2024-07-15 20:24:39.470988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 [2024-07-15 20:24:39.471184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.471232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.317 [2024-07-15 20:24:39.471253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.317 [2024-07-15 20:24:39.471273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.317 [2024-07-15 20:24:39.471323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.317 [2024-07-15 20:24:39.480973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.317 [2024-07-15 20:24:39.481236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.481280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.317 [2024-07-15 20:24:39.481301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.317 [2024-07-15 20:24:39.481319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.317 [2024-07-15 20:24:39.481363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.317 qpair failed and we were unable to recover it. 
00:29:42.317 20:24:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1178049 00:29:42.317 [2024-07-15 20:24:39.491014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 [2024-07-15 20:24:39.491159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.491190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.317 [2024-07-15 20:24:39.491204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.317 [2024-07-15 20:24:39.491216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.317 [2024-07-15 20:24:39.491246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.500942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 [2024-07-15 20:24:39.501062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.501087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.317 [2024-07-15 20:24:39.501097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.317 [2024-07-15 20:24:39.501106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.317 [2024-07-15 20:24:39.501133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.510951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 [2024-07-15 20:24:39.511053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.511072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.317 [2024-07-15 20:24:39.511080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.317 [2024-07-15 20:24:39.511086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.317 [2024-07-15 20:24:39.511103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.317 qpair failed and we were unable to recover it. 
00:29:42.317 [2024-07-15 20:24:39.521006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 [2024-07-15 20:24:39.521099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.521119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.317 [2024-07-15 20:24:39.521131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.317 [2024-07-15 20:24:39.521137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.317 [2024-07-15 20:24:39.521155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.317 qpair failed and we were unable to recover it. 00:29:42.317 [2024-07-15 20:24:39.531011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.317 [2024-07-15 20:24:39.531107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.317 [2024-07-15 20:24:39.531135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.531143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.531150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.531168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.541011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.541101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.541130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.541138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.541144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.541162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 
00:29:42.318 [2024-07-15 20:24:39.551049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.551161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.551183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.551191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.551198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.551219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.561030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.561147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.561169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.561177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.561183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.561201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.571062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.571183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.571204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.571212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.571218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.571236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 
00:29:42.318 [2024-07-15 20:24:39.581171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.581270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.581293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.581301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.581308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.581332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.591199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.591305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.591328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.591336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.591342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.591360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.601130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.601223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.601249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.601258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.601264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.601284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 
00:29:42.318 [2024-07-15 20:24:39.611242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.611332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.611355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.611363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.611370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.611388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.621274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.621375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.621401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.621409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.621416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.621440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.631361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.631471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.631503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.631511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.631517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.631538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 
00:29:42.318 [2024-07-15 20:24:39.641267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.641362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.641389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.641398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.641404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.641424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.651395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.651532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.651559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.651569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.651576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.651595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 00:29:42.318 [2024-07-15 20:24:39.661411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.661508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.661535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.661543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.661549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.318 [2024-07-15 20:24:39.661569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.318 qpair failed and we were unable to recover it. 
00:29:42.318 [2024-07-15 20:24:39.671461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.318 [2024-07-15 20:24:39.671582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.318 [2024-07-15 20:24:39.671609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.318 [2024-07-15 20:24:39.671617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.318 [2024-07-15 20:24:39.671630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.671650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 00:29:42.319 [2024-07-15 20:24:39.681455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.681568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.681596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.681603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.681610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.681630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 00:29:42.319 [2024-07-15 20:24:39.691529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.691643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.691670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.691677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.691684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.691704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 
00:29:42.319 [2024-07-15 20:24:39.701514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.701609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.701636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.701643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.701649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.701668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 00:29:42.319 [2024-07-15 20:24:39.711562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.711684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.711711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.711719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.711725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.711745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 00:29:42.319 [2024-07-15 20:24:39.721580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.721700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.721727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.721735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.721741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.721760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 
00:29:42.319 [2024-07-15 20:24:39.731598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.731704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.731731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.731738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.731745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.731765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 00:29:42.319 [2024-07-15 20:24:39.741630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.319 [2024-07-15 20:24:39.741757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.319 [2024-07-15 20:24:39.741797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.319 [2024-07-15 20:24:39.741808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.319 [2024-07-15 20:24:39.741815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.319 [2024-07-15 20:24:39.741840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.319 qpair failed and we were unable to recover it. 00:29:42.582 [2024-07-15 20:24:39.751685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.751796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.751836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.751846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.751853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.751879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 
00:29:42.582 [2024-07-15 20:24:39.761731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.761834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.761875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.761885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.761900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.761926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 00:29:42.582 [2024-07-15 20:24:39.771865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.771981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.772021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.772032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.772039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.772065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 00:29:42.582 [2024-07-15 20:24:39.781855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.781971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.782000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.782009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.782015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.782037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 
00:29:42.582 [2024-07-15 20:24:39.791807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.791917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.791945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.791953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.791959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.791980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 00:29:42.582 [2024-07-15 20:24:39.801870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.801976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.802005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.802013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.802023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.802044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 00:29:42.582 [2024-07-15 20:24:39.811834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.811931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.811959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.811968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.811974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.811995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 
00:29:42.582 [2024-07-15 20:24:39.821863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.582 [2024-07-15 20:24:39.821962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.582 [2024-07-15 20:24:39.821989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.582 [2024-07-15 20:24:39.821997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.582 [2024-07-15 20:24:39.822004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.582 [2024-07-15 20:24:39.822025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.582 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.831889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.832026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.832053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.832063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.832069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.832090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.841933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.842033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.842061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.842069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.842075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.842096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 
00:29:42.583 [2024-07-15 20:24:39.851949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.852049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.852076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.852091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.852098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.852119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.862049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.862193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.862221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.862228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.862235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.862254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.872012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.872150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.872177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.872186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.872192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.872212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 
00:29:42.583 [2024-07-15 20:24:39.882030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.882137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.882165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.882174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.882180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.882201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.892092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.892198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.892225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.892233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.892239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.892260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.902118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.902231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.902259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.902267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.902273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.902294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 
00:29:42.583 [2024-07-15 20:24:39.912157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.912273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.912299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.912307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.912314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.912334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.922182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.922281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.922307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.922315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.922321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.922341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.932201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.932298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.932324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.932332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.932339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.932360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 
00:29:42.583 [2024-07-15 20:24:39.942209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.942313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.942345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.942353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.942360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.942380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.952291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.952417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.583 [2024-07-15 20:24:39.952443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.583 [2024-07-15 20:24:39.952451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.583 [2024-07-15 20:24:39.952457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.583 [2024-07-15 20:24:39.952478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.583 qpair failed and we were unable to recover it. 00:29:42.583 [2024-07-15 20:24:39.962281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.583 [2024-07-15 20:24:39.962370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.584 [2024-07-15 20:24:39.962397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.584 [2024-07-15 20:24:39.962405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.584 [2024-07-15 20:24:39.962412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.584 [2024-07-15 20:24:39.962432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.584 qpair failed and we were unable to recover it. 
00:29:42.584 [2024-07-15 20:24:39.972305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.584 [2024-07-15 20:24:39.972405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.584 [2024-07-15 20:24:39.972432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.584 [2024-07-15 20:24:39.972440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.584 [2024-07-15 20:24:39.972446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.584 [2024-07-15 20:24:39.972466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.584 qpair failed and we were unable to recover it. 00:29:42.584 [2024-07-15 20:24:39.982364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.584 [2024-07-15 20:24:39.982461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.584 [2024-07-15 20:24:39.982488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.584 [2024-07-15 20:24:39.982496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.584 [2024-07-15 20:24:39.982502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.584 [2024-07-15 20:24:39.982535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.584 qpair failed and we were unable to recover it. 00:29:42.584 [2024-07-15 20:24:39.992400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.584 [2024-07-15 20:24:39.992515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.584 [2024-07-15 20:24:39.992542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.584 [2024-07-15 20:24:39.992550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.584 [2024-07-15 20:24:39.992556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.584 [2024-07-15 20:24:39.992576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.584 qpair failed and we were unable to recover it. 
00:29:42.584 [2024-07-15 20:24:40.002783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.584 [2024-07-15 20:24:40.002885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.584 [2024-07-15 20:24:40.002914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.584 [2024-07-15 20:24:40.002923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.584 [2024-07-15 20:24:40.002930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.584 [2024-07-15 20:24:40.002950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.584 qpair failed and we were unable to recover it. 00:29:42.847 [2024-07-15 20:24:40.012491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.847 [2024-07-15 20:24:40.012621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.847 [2024-07-15 20:24:40.012662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.847 [2024-07-15 20:24:40.012673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.847 [2024-07-15 20:24:40.012681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.847 [2024-07-15 20:24:40.012710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.847 qpair failed and we were unable to recover it. 00:29:42.847 [2024-07-15 20:24:40.022479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.847 [2024-07-15 20:24:40.022586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.847 [2024-07-15 20:24:40.022626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.847 [2024-07-15 20:24:40.022637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.847 [2024-07-15 20:24:40.022644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.847 [2024-07-15 20:24:40.022670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.847 qpair failed and we were unable to recover it. 
00:29:42.847 [2024-07-15 20:24:40.032522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.847 [2024-07-15 20:24:40.032639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.847 [2024-07-15 20:24:40.032678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.847 [2024-07-15 20:24:40.032689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.847 [2024-07-15 20:24:40.032696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.847 [2024-07-15 20:24:40.032718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.847 qpair failed and we were unable to recover it. 00:29:42.847 [2024-07-15 20:24:40.042538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.847 [2024-07-15 20:24:40.042652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.847 [2024-07-15 20:24:40.042680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.847 [2024-07-15 20:24:40.042689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.847 [2024-07-15 20:24:40.042696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.847 [2024-07-15 20:24:40.042716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.847 qpair failed and we were unable to recover it. 00:29:42.847 [2024-07-15 20:24:40.052478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.847 [2024-07-15 20:24:40.052615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.847 [2024-07-15 20:24:40.052653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.847 [2024-07-15 20:24:40.052665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.847 [2024-07-15 20:24:40.052675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.052706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 
00:29:42.848 [2024-07-15 20:24:40.062634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.062740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.062770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.062779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.062785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.062809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.072618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.072723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.072751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.072759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.072774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.072796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.082668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.082791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.082831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.082842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.082849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.082876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 
00:29:42.848 [2024-07-15 20:24:40.092637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.092737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.092777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.092788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.092795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.092822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.102738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.102845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.102885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.102896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.102903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.102930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.112734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.112845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.112884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.112894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.112902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.112928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 
00:29:42.848 [2024-07-15 20:24:40.122841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.122957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.122987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.122996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.123003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.123024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.132753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.132851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.132879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.132888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.132895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.132916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.142835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.142937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.142963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.142972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.142978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.143000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 
00:29:42.848 [2024-07-15 20:24:40.152835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.152940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.152967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.152976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.152983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.153003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.162859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.162950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.162977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.162985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.163001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.163025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.172862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.172956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.172983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.172994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.173000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.173021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 
00:29:42.848 [2024-07-15 20:24:40.182942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.183039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.183066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.183075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.183082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.183103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.192938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.848 [2024-07-15 20:24:40.193044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.848 [2024-07-15 20:24:40.193071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.848 [2024-07-15 20:24:40.193079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.848 [2024-07-15 20:24:40.193086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.848 [2024-07-15 20:24:40.193107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.848 qpair failed and we were unable to recover it. 00:29:42.848 [2024-07-15 20:24:40.202971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.203062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.203088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.203096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.203103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.203130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 
00:29:42.849 [2024-07-15 20:24:40.212992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.213088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.213116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.213132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.213139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.213160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-07-15 20:24:40.223050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.223196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.223237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.223245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.223252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.223274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-07-15 20:24:40.233017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.233137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.233163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.233172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.233179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.233201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 
00:29:42.849 [2024-07-15 20:24:40.243133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.243269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.243296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.243305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.243312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.243333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-07-15 20:24:40.253070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.253177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.253204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.253219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.253226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.253248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-07-15 20:24:40.263061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.263163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.263191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.263199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.263205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.263226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 
00:29:42.849 [2024-07-15 20:24:40.273217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.849 [2024-07-15 20:24:40.273328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.849 [2024-07-15 20:24:40.273358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.849 [2024-07-15 20:24:40.273367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.849 [2024-07-15 20:24:40.273373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:42.849 [2024-07-15 20:24:40.273394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.849 qpair failed and we were unable to recover it. 00:29:43.111 [2024-07-15 20:24:40.283112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.111 [2024-07-15 20:24:40.283210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.111 [2024-07-15 20:24:40.283238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.111 [2024-07-15 20:24:40.283246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.111 [2024-07-15 20:24:40.283253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.111 [2024-07-15 20:24:40.283274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.111 qpair failed and we were unable to recover it. 00:29:43.111 [2024-07-15 20:24:40.293268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.111 [2024-07-15 20:24:40.293376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.111 [2024-07-15 20:24:40.293403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.111 [2024-07-15 20:24:40.293411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.111 [2024-07-15 20:24:40.293418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.111 [2024-07-15 20:24:40.293438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.111 qpair failed and we were unable to recover it. 
00:29:43.111 [2024-07-15 20:24:40.303275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.111 [2024-07-15 20:24:40.303421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.111 [2024-07-15 20:24:40.303448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.111 [2024-07-15 20:24:40.303457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.111 [2024-07-15 20:24:40.303463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.111 [2024-07-15 20:24:40.303483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.111 qpair failed and we were unable to recover it. 00:29:43.111 [2024-07-15 20:24:40.313339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.111 [2024-07-15 20:24:40.313490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.111 [2024-07-15 20:24:40.313517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.111 [2024-07-15 20:24:40.313525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.111 [2024-07-15 20:24:40.313531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.111 [2024-07-15 20:24:40.313550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.111 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.323232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.323334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.323360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.323369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.323375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.323395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 
00:29:43.112 [2024-07-15 20:24:40.333290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.333391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.333418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.333427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.333433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.333454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.343410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.343522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.343554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.343563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.343569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.343588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.353443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.353574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.353601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.353609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.353615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.353634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 
00:29:43.112 [2024-07-15 20:24:40.363430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.363572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.363599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.363607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.363613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.363633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.373478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.373568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.373594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.373602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.373609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.373628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.383540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.383644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.383671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.383679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.383685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.383712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 
00:29:43.112 [2024-07-15 20:24:40.393585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.393699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.393738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.393749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.393756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.393781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.403617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.403713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.403753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.403763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.403769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.403795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.413501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.413598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.413628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.413638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.413647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.413670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 
00:29:43.112 [2024-07-15 20:24:40.423669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.423766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.423794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.423803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.423810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.423831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.433700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.433811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.433847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.433856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.433862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.433883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.443576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.443663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.443692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.443701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.443707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.443729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 
00:29:43.112 [2024-07-15 20:24:40.453701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.112 [2024-07-15 20:24:40.453810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.112 [2024-07-15 20:24:40.453837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.112 [2024-07-15 20:24:40.453846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.112 [2024-07-15 20:24:40.453852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.112 [2024-07-15 20:24:40.453873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.112 qpair failed and we were unable to recover it. 00:29:43.112 [2024-07-15 20:24:40.463831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.463926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.463952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.463960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.463967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.463987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 00:29:43.113 [2024-07-15 20:24:40.473843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.473957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.473983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.473991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.473997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.474024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 
00:29:43.113 [2024-07-15 20:24:40.483832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.483948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.483976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.483984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.483991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.484012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 00:29:43.113 [2024-07-15 20:24:40.493848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.493947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.493973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.493982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.493988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.494009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 00:29:43.113 [2024-07-15 20:24:40.503906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.504024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.504050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.504058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.504064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.504084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 
00:29:43.113 [2024-07-15 20:24:40.513831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.513943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.513970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.513978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.513984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.514004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 00:29:43.113 [2024-07-15 20:24:40.523927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.524135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.524161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.524169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.524176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.524197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 00:29:43.113 [2024-07-15 20:24:40.533970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.113 [2024-07-15 20:24:40.534069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.113 [2024-07-15 20:24:40.534095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.113 [2024-07-15 20:24:40.534104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.113 [2024-07-15 20:24:40.534110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.113 [2024-07-15 20:24:40.534135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.113 qpair failed and we were unable to recover it. 
00:29:43.376 [2024-07-15 20:24:40.544153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.376 [2024-07-15 20:24:40.544265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.376 [2024-07-15 20:24:40.544291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.376 [2024-07-15 20:24:40.544300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.376 [2024-07-15 20:24:40.544307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.376 [2024-07-15 20:24:40.544328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.376 qpair failed and we were unable to recover it. 00:29:43.376 [2024-07-15 20:24:40.553940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.376 [2024-07-15 20:24:40.554049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.376 [2024-07-15 20:24:40.554076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.376 [2024-07-15 20:24:40.554084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.376 [2024-07-15 20:24:40.554090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.376 [2024-07-15 20:24:40.554110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.376 qpair failed and we were unable to recover it. 00:29:43.376 [2024-07-15 20:24:40.564076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.376 [2024-07-15 20:24:40.564172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.376 [2024-07-15 20:24:40.564199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.376 [2024-07-15 20:24:40.564207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.376 [2024-07-15 20:24:40.564221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.376 [2024-07-15 20:24:40.564241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.376 qpair failed and we were unable to recover it. 
00:29:43.376 [2024-07-15 20:24:40.574030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.376 [2024-07-15 20:24:40.574148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.376 [2024-07-15 20:24:40.574175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.376 [2024-07-15 20:24:40.574183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.376 [2024-07-15 20:24:40.574189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.376 [2024-07-15 20:24:40.574210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.376 qpair failed and we were unable to recover it. 00:29:43.376 [2024-07-15 20:24:40.584052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.376 [2024-07-15 20:24:40.584152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.376 [2024-07-15 20:24:40.584181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.376 [2024-07-15 20:24:40.584189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.584195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.584217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.594165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.594275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.594303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.594312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.594318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.594339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 
00:29:43.377 [2024-07-15 20:24:40.604193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.604300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.604326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.604335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.604342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.604362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.614261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.614364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.614391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.614399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.614405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.614425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.624281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.624386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.624412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.624420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.624426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.624447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 
00:29:43.377 [2024-07-15 20:24:40.634327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.634445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.634472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.634480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.634486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.634507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.644211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.644312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.644342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.644351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.644357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.644379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.654366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.654471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.654498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.654513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.654520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.654540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 
00:29:43.377 [2024-07-15 20:24:40.664410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.664520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.664546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.664556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.664567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.664590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.674473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.674587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.674614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.674621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.674629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.674650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.684436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.684536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.684563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.684570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.684577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.684596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 
00:29:43.377 [2024-07-15 20:24:40.694476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.694575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.694601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.694610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.694616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.694639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.704461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.704554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.704580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.704587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.704594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.704614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 00:29:43.377 [2024-07-15 20:24:40.714515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.714618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.714644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.714652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.377 [2024-07-15 20:24:40.714659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.377 [2024-07-15 20:24:40.714679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.377 qpair failed and we were unable to recover it. 
00:29:43.377 [2024-07-15 20:24:40.724555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.377 [2024-07-15 20:24:40.724667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.377 [2024-07-15 20:24:40.724693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.377 [2024-07-15 20:24:40.724701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.724708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.724727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 00:29:43.378 [2024-07-15 20:24:40.734579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.734680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.734706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.734715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.734721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.734741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 00:29:43.378 [2024-07-15 20:24:40.744756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.744870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.744910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.744928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.744935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.744963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 
00:29:43.378 [2024-07-15 20:24:40.754632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.754748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.754788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.754798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.754805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.754830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 00:29:43.378 [2024-07-15 20:24:40.764676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.764783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.764823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.764833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.764840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.764865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 00:29:43.378 [2024-07-15 20:24:40.774585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.774687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.774718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.774727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.774733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.774756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 
00:29:43.378 [2024-07-15 20:24:40.784736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.784838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.784865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.784873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.784880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.784901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 00:29:43.378 [2024-07-15 20:24:40.794812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.794921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.794961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.794971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.794977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.795001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 00:29:43.378 [2024-07-15 20:24:40.804770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.378 [2024-07-15 20:24:40.804891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.378 [2024-07-15 20:24:40.804920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.378 [2024-07-15 20:24:40.804928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.378 [2024-07-15 20:24:40.804934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.378 [2024-07-15 20:24:40.804956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.378 qpair failed and we were unable to recover it. 
00:29:43.641 [2024-07-15 20:24:40.814790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.814891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.814918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.814927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.814933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.814954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 00:29:43.641 [2024-07-15 20:24:40.824851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.824955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.824982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.824990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.824998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.825019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 00:29:43.641 [2024-07-15 20:24:40.834933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.835071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.835105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.835114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.835120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.835148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 
00:29:43.641 [2024-07-15 20:24:40.844861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.844967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.844995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.845003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.845009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.845029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 00:29:43.641 [2024-07-15 20:24:40.854942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.855046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.855074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.855082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.855088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.855109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 00:29:43.641 [2024-07-15 20:24:40.864978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.865072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.865098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.865106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.865112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.865139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 
00:29:43.641 [2024-07-15 20:24:40.874979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.875087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.641 [2024-07-15 20:24:40.875113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.641 [2024-07-15 20:24:40.875127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.641 [2024-07-15 20:24:40.875134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.641 [2024-07-15 20:24:40.875161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.641 qpair failed and we were unable to recover it. 00:29:43.641 [2024-07-15 20:24:40.885032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.641 [2024-07-15 20:24:40.885133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.885160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.885168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.885175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.885196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.895046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.895166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.895193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.895201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.895207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.895228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 
00:29:43.642 [2024-07-15 20:24:40.905092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.905190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.905217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.905225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.905231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.905252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.915101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.915209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.915236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.915245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.915251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.915272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.925120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.925215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.925247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.925256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.925263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.925284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 
00:29:43.642 [2024-07-15 20:24:40.935180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.935279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.935307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.935316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.935322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.935343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.945218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.945327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.945354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.945362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.945369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.945389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.955262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.955411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.955437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.955445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.955452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.955471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 
00:29:43.642 [2024-07-15 20:24:40.965269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.965368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.965394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.965402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.965416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.965435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.975283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.975387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.975414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.975422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.975429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.975449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:40.985366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.985469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.985496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.985504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.985510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.985531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 
00:29:43.642 [2024-07-15 20:24:40.995296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:40.995410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:40.995437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:40.995445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:40.995451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:40.995472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:41.005351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:41.005439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:41.005464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:41.005473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:41.005480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:41.005500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 00:29:43.642 [2024-07-15 20:24:41.015413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.642 [2024-07-15 20:24:41.015511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.642 [2024-07-15 20:24:41.015537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.642 [2024-07-15 20:24:41.015545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.642 [2024-07-15 20:24:41.015552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.642 [2024-07-15 20:24:41.015571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.642 qpair failed and we were unable to recover it. 
00:29:43.642 [2024-07-15 20:24:41.025461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.643 [2024-07-15 20:24:41.025555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.643 [2024-07-15 20:24:41.025580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.643 [2024-07-15 20:24:41.025588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.643 [2024-07-15 20:24:41.025595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.643 [2024-07-15 20:24:41.025615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.643 qpair failed and we were unable to recover it. 00:29:43.643 [2024-07-15 20:24:41.035515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.643 [2024-07-15 20:24:41.035623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.643 [2024-07-15 20:24:41.035649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.643 [2024-07-15 20:24:41.035657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.643 [2024-07-15 20:24:41.035664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.643 [2024-07-15 20:24:41.035682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.643 qpair failed and we were unable to recover it. 00:29:43.643 [2024-07-15 20:24:41.045546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.643 [2024-07-15 20:24:41.045678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.643 [2024-07-15 20:24:41.045704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.643 [2024-07-15 20:24:41.045712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.643 [2024-07-15 20:24:41.045718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.643 [2024-07-15 20:24:41.045738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.643 qpair failed and we were unable to recover it. 
00:29:43.643 [2024-07-15 20:24:41.055537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.643 [2024-07-15 20:24:41.055646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.643 [2024-07-15 20:24:41.055686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.643 [2024-07-15 20:24:41.055704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.643 [2024-07-15 20:24:41.055711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.643 [2024-07-15 20:24:41.055737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.643 qpair failed and we were unable to recover it. 00:29:43.643 [2024-07-15 20:24:41.065582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.643 [2024-07-15 20:24:41.065690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.643 [2024-07-15 20:24:41.065729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.643 [2024-07-15 20:24:41.065739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.643 [2024-07-15 20:24:41.065746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.643 [2024-07-15 20:24:41.065771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.643 qpair failed and we were unable to recover it. 00:29:43.906 [2024-07-15 20:24:41.075604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.906 [2024-07-15 20:24:41.075720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.906 [2024-07-15 20:24:41.075760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.906 [2024-07-15 20:24:41.075771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.906 [2024-07-15 20:24:41.075778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.906 [2024-07-15 20:24:41.075804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.906 qpair failed and we were unable to recover it. 
00:29:43.906 [2024-07-15 20:24:41.085610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.906 [2024-07-15 20:24:41.085714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.906 [2024-07-15 20:24:41.085754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.906 [2024-07-15 20:24:41.085764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.906 [2024-07-15 20:24:41.085771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.906 [2024-07-15 20:24:41.085797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.906 qpair failed and we were unable to recover it. 00:29:43.906 [2024-07-15 20:24:41.095616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.906 [2024-07-15 20:24:41.095713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.906 [2024-07-15 20:24:41.095753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.095763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.095770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.095796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.105649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.105751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.105791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.105801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.105808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.105833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 
00:29:43.907 [2024-07-15 20:24:41.115729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.115861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.115890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.115899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.115905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.115926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.125741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.125849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.125889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.125899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.125906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.125931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.135774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.135879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.135919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.135929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.135936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.135961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 
00:29:43.907 [2024-07-15 20:24:41.145815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.145918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.145950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.145968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.145975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.145999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.155833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.155947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.155975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.155983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.155990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.156011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.165823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.165925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.165952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.165960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.165967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.165988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 
00:29:43.907 [2024-07-15 20:24:41.175892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.175992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.176019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.176028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.176034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.176053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.185954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.186050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.186077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.186085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.186091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.186112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 00:29:43.907 [2024-07-15 20:24:41.195933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.196045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.907 [2024-07-15 20:24:41.196072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.907 [2024-07-15 20:24:41.196081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.907 [2024-07-15 20:24:41.196087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.907 [2024-07-15 20:24:41.196107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.907 qpair failed and we were unable to recover it. 
00:29:43.907 [2024-07-15 20:24:41.205952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.907 [2024-07-15 20:24:41.206051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.206077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.206085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.206091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.206110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.216012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.216109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.216140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.216149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.216156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.216176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.226052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.226258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.226285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.226293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.226299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.226319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 
00:29:43.908 [2024-07-15 20:24:41.236198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.236302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.236335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.236345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.236351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.236372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.246070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.246173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.246200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.246208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.246215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.246236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.256134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.256226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.256253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.256262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.256268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.256289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 
00:29:43.908 [2024-07-15 20:24:41.266175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.266286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.266312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.266320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.266327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.266347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.276191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.276294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.276320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.276328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.276335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.276361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.286204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.286310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.286337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.286345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.286352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.286373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 
00:29:43.908 [2024-07-15 20:24:41.296141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.296233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.296259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.296267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.296274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.296294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.306290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.306388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.306416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.306423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.306430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.306450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.316331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.316441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.316467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.316476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.316483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.316502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 
00:29:43.908 [2024-07-15 20:24:41.326314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.326542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.326575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.326583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.326589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.908 [2024-07-15 20:24:41.326609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.908 qpair failed and we were unable to recover it. 00:29:43.908 [2024-07-15 20:24:41.336296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.908 [2024-07-15 20:24:41.336394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.908 [2024-07-15 20:24:41.336422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.908 [2024-07-15 20:24:41.336431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.908 [2024-07-15 20:24:41.336437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:43.909 [2024-07-15 20:24:41.336457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.909 qpair failed and we were unable to recover it. 00:29:44.172 [2024-07-15 20:24:41.346422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.346537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.346562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.346570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.346577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.346597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 
00:29:44.172 [2024-07-15 20:24:41.356391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.356498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.356524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.356533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.356539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.356560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 00:29:44.172 [2024-07-15 20:24:41.366462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.366568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.366596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.366606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.366620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.366640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 00:29:44.172 [2024-07-15 20:24:41.376529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.376625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.376652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.376660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.376667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.376688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 
00:29:44.172 [2024-07-15 20:24:41.386432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.386533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.386562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.386571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.386578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.386599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 00:29:44.172 [2024-07-15 20:24:41.396568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.396676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.396704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.396713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.396720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.396742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 00:29:44.172 [2024-07-15 20:24:41.406564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.172 [2024-07-15 20:24:41.406661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.172 [2024-07-15 20:24:41.406688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.172 [2024-07-15 20:24:41.406697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.172 [2024-07-15 20:24:41.406703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.172 [2024-07-15 20:24:41.406724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.172 qpair failed and we were unable to recover it. 
00:29:44.172 [2024-07-15 20:24:41.416555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.416652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.416680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.416689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.416695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.416715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.426680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.426799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.426839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.426850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.426857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.426882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.436602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.436707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.436736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.436745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.436751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.436773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 
00:29:44.173 [2024-07-15 20:24:41.446695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.446800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.446840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.446851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.446859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.446884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.456754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.456858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.456898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.456909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.456929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.456955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.466760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.466860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.466900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.466910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.466917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.466943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 
00:29:44.173 [2024-07-15 20:24:41.476815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.476934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.476965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.476974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.476982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.477006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.486835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.486940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.486969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.486978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.486984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.487005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.496888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.497035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.497062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.497070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.497076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.497096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 
00:29:44.173 [2024-07-15 20:24:41.506931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.507062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.507090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.507098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.507104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.507132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.516935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.517047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.517074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.517083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.517089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.517110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.526852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.526950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.526978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.526986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.526992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.527012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 
00:29:44.173 [2024-07-15 20:24:41.536998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.537096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.537131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.537140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.537147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.537168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.547008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.547113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.547148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.173 [2024-07-15 20:24:41.547163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.173 [2024-07-15 20:24:41.547169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.173 [2024-07-15 20:24:41.547190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.173 qpair failed and we were unable to recover it. 00:29:44.173 [2024-07-15 20:24:41.557084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.173 [2024-07-15 20:24:41.557189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.173 [2024-07-15 20:24:41.557214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.174 [2024-07-15 20:24:41.557223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.174 [2024-07-15 20:24:41.557229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.174 [2024-07-15 20:24:41.557250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.174 qpair failed and we were unable to recover it. 
00:29:44.174 [2024-07-15 20:24:41.567079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.174 [2024-07-15 20:24:41.567175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.174 [2024-07-15 20:24:41.567202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.174 [2024-07-15 20:24:41.567210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.174 [2024-07-15 20:24:41.567218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.174 [2024-07-15 20:24:41.567238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.174 qpair failed and we were unable to recover it. 00:29:44.174 [2024-07-15 20:24:41.577015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.174 [2024-07-15 20:24:41.577109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.174 [2024-07-15 20:24:41.577142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.174 [2024-07-15 20:24:41.577151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.174 [2024-07-15 20:24:41.577157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.174 [2024-07-15 20:24:41.577178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.174 qpair failed and we were unable to recover it. 00:29:44.174 [2024-07-15 20:24:41.587078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.174 [2024-07-15 20:24:41.587180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.174 [2024-07-15 20:24:41.587207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.174 [2024-07-15 20:24:41.587215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.174 [2024-07-15 20:24:41.587222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.174 [2024-07-15 20:24:41.587242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.174 qpair failed and we were unable to recover it. 
00:29:44.174 [2024-07-15 20:24:41.597142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.174 [2024-07-15 20:24:41.597246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.174 [2024-07-15 20:24:41.597272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.174 [2024-07-15 20:24:41.597280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.174 [2024-07-15 20:24:41.597286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.174 [2024-07-15 20:24:41.597307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.174 qpair failed and we were unable to recover it. 00:29:44.437 [2024-07-15 20:24:41.607204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.437 [2024-07-15 20:24:41.607306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.437 [2024-07-15 20:24:41.607334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.437 [2024-07-15 20:24:41.607342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.437 [2024-07-15 20:24:41.607349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.437 [2024-07-15 20:24:41.607369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.437 qpair failed and we were unable to recover it. 00:29:44.437 [2024-07-15 20:24:41.617119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.437 [2024-07-15 20:24:41.617216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.437 [2024-07-15 20:24:41.617242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.437 [2024-07-15 20:24:41.617251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.437 [2024-07-15 20:24:41.617258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.437 [2024-07-15 20:24:41.617278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.437 qpair failed and we were unable to recover it. 
00:29:44.437 [2024-07-15 20:24:41.627260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.627358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.627384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.627392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.627398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.627418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.637296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.637407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.637439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.637448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.637454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.637474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.647326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.647426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.647452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.647462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.647468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.647487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 
00:29:44.438 [2024-07-15 20:24:41.657355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.657452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.657480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.657488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.657494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.657515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.667405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.667599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.667625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.667633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.667640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.667660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.677386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.677493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.677519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.677527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.677535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.677562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 
00:29:44.438 [2024-07-15 20:24:41.687404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.687512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.687540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.687547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.687553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.687574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.697431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.697528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.697554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.697562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.697569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.697588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.707394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.707493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.707520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.707528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.707534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.707554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 
00:29:44.438 [2024-07-15 20:24:41.717537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.717640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.717665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.717674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.717680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.717700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.727550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.727657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.438 [2024-07-15 20:24:41.727705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.438 [2024-07-15 20:24:41.727716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.438 [2024-07-15 20:24:41.727723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.438 [2024-07-15 20:24:41.727748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.438 qpair failed and we were unable to recover it. 00:29:44.438 [2024-07-15 20:24:41.737593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.438 [2024-07-15 20:24:41.737699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.737739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.737749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.737756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.737782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 
00:29:44.439 [2024-07-15 20:24:41.747603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.747710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.747750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.747760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.747767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.747791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.757672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.757792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.757832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.757842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.757849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.757875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.767658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.767787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.767816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.767825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.767838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.767860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 
00:29:44.439 [2024-07-15 20:24:41.777823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.777936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.777963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.777971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.777978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.777999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.787666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.787767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.787794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.787802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.787809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.787829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.797787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.797894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.797921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.797929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.797935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.797956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 
00:29:44.439 [2024-07-15 20:24:41.807820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.807919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.807947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.807955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.807962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.807983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.817826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.817932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.817960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.817968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.817974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.817994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.827883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.827990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.828017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.828025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.828031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.828051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 
00:29:44.439 [2024-07-15 20:24:41.837864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.837973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.838000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.439 [2024-07-15 20:24:41.838008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.439 [2024-07-15 20:24:41.838014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.439 [2024-07-15 20:24:41.838036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.439 qpair failed and we were unable to recover it. 00:29:44.439 [2024-07-15 20:24:41.847862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.439 [2024-07-15 20:24:41.847965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.439 [2024-07-15 20:24:41.847992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.440 [2024-07-15 20:24:41.848001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.440 [2024-07-15 20:24:41.848007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.440 [2024-07-15 20:24:41.848027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.440 qpair failed and we were unable to recover it. 00:29:44.440 [2024-07-15 20:24:41.857956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.440 [2024-07-15 20:24:41.858057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.440 [2024-07-15 20:24:41.858084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.440 [2024-07-15 20:24:41.858092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.440 [2024-07-15 20:24:41.858105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.440 [2024-07-15 20:24:41.858132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.440 qpair failed and we were unable to recover it. 
00:29:44.703 [2024-07-15 20:24:41.867977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.703 [2024-07-15 20:24:41.868083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.703 [2024-07-15 20:24:41.868111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.703 [2024-07-15 20:24:41.868119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.703 [2024-07-15 20:24:41.868135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.703 [2024-07-15 20:24:41.868156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.703 qpair failed and we were unable to recover it. 00:29:44.703 [2024-07-15 20:24:41.878074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.703 [2024-07-15 20:24:41.878207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.703 [2024-07-15 20:24:41.878234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.703 [2024-07-15 20:24:41.878243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.703 [2024-07-15 20:24:41.878250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.703 [2024-07-15 20:24:41.878271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.703 qpair failed and we were unable to recover it. 00:29:44.703 [2024-07-15 20:24:41.888024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.703 [2024-07-15 20:24:41.888132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.703 [2024-07-15 20:24:41.888159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.703 [2024-07-15 20:24:41.888168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.703 [2024-07-15 20:24:41.888174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.703 [2024-07-15 20:24:41.888195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.703 qpair failed and we were unable to recover it. 
00:29:44.703 [2024-07-15 20:24:41.898054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.703 [2024-07-15 20:24:41.898185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.703 [2024-07-15 20:24:41.898212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.703 [2024-07-15 20:24:41.898220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.703 [2024-07-15 20:24:41.898227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.703 [2024-07-15 20:24:41.898248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.703 qpair failed and we were unable to recover it. 00:29:44.703 [2024-07-15 20:24:41.908094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.703 [2024-07-15 20:24:41.908209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.703 [2024-07-15 20:24:41.908237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.908245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.908251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.908272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:41.918099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.918216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.918243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.918252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.918258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.918279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 
00:29:44.704 [2024-07-15 20:24:41.928155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.928251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.928278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.928286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.928292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.928313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:41.938176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.938274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.938300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.938308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.938314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.938334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:41.948231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.948335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.948362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.948377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.948383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.948404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 
00:29:44.704 [2024-07-15 20:24:41.958237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.958337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.958363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.958372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.958379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.958399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:41.968255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.968356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.968382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.968390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.968397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.968417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:41.978289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.978488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.978514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.978522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.978528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.978548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 
00:29:44.704 [2024-07-15 20:24:41.988363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.988457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.988484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.988493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.988499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.988519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:41.998375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:41.998489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:41.998516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:41.998523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:41.998530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:41.998550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:42.008466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:42.008563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:42.008589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:42.008597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:42.008603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:42.008623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 
00:29:44.704 [2024-07-15 20:24:42.018331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:42.018458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:42.018484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:42.018492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:42.018498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:42.018518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.704 qpair failed and we were unable to recover it. 00:29:44.704 [2024-07-15 20:24:42.028468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.704 [2024-07-15 20:24:42.028570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.704 [2024-07-15 20:24:42.028597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.704 [2024-07-15 20:24:42.028606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.704 [2024-07-15 20:24:42.028612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.704 [2024-07-15 20:24:42.028632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.038526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.038645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.038681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.038691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.038697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.038718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 
00:29:44.705 [2024-07-15 20:24:42.048520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.048615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.048642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.048650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.048657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.048677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.058540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.058642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.058669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.058677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.058683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.058703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.068578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.068672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.068699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.068707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.068714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.068734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 
00:29:44.705 [2024-07-15 20:24:42.078617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.078738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.078778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.078789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.078796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.078830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.088622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.088729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.088769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.088779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.088786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.088812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.098637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.098742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.098783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.098793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.098801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.098826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 
00:29:44.705 [2024-07-15 20:24:42.108701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.108802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.108839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.108850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.108856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.108881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.118713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.118828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.118865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.118875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.118882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.118907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 00:29:44.705 [2024-07-15 20:24:42.128745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.705 [2024-07-15 20:24:42.128846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.705 [2024-07-15 20:24:42.128878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.705 [2024-07-15 20:24:42.128887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.705 [2024-07-15 20:24:42.128893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.705 [2024-07-15 20:24:42.128912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.705 qpair failed and we were unable to recover it. 
00:29:44.968 [2024-07-15 20:24:42.138697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.968 [2024-07-15 20:24:42.138779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.968 [2024-07-15 20:24:42.138803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.968 [2024-07-15 20:24:42.138813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.968 [2024-07-15 20:24:42.138819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.968 [2024-07-15 20:24:42.138840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.968 qpair failed and we were unable to recover it. 00:29:44.968 [2024-07-15 20:24:42.148815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.968 [2024-07-15 20:24:42.148900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.968 [2024-07-15 20:24:42.148923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.968 [2024-07-15 20:24:42.148930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.968 [2024-07-15 20:24:42.148936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.968 [2024-07-15 20:24:42.148954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.968 qpair failed and we were unable to recover it. 00:29:44.968 [2024-07-15 20:24:42.158794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.968 [2024-07-15 20:24:42.158900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.968 [2024-07-15 20:24:42.158932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.968 [2024-07-15 20:24:42.158941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.968 [2024-07-15 20:24:42.158948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.968 [2024-07-15 20:24:42.158970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.968 qpair failed and we were unable to recover it. 
00:29:44.968 [2024-07-15 20:24:42.168836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.968 [2024-07-15 20:24:42.168938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.968 [2024-07-15 20:24:42.168969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.968 [2024-07-15 20:24:42.168978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.968 [2024-07-15 20:24:42.168985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.968 [2024-07-15 20:24:42.169019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.968 qpair failed and we were unable to recover it. 00:29:44.968 [2024-07-15 20:24:42.178826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.968 [2024-07-15 20:24:42.178912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.968 [2024-07-15 20:24:42.178934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.968 [2024-07-15 20:24:42.178942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.178948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.178966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.188826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.188909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.188929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.188937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.188943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.188960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 
00:29:44.969 [2024-07-15 20:24:42.198905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.199000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.199030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.199040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.199047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.199068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.208935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.209026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.209048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.209056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.209062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.209080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.218934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.219015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.219035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.219042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.219048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.219065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 
00:29:44.969 [2024-07-15 20:24:42.228935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.229012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.229031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.229038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.229044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.229060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.239135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.239232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.239250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.239257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.239263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.239279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.249046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.249129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.249147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.249154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.249161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.249176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 
00:29:44.969 [2024-07-15 20:24:42.259003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.259086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.259103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.259111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.259128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.259145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.269057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.269231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.269248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.269255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.269261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.269276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.279094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.279184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.279201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.279208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.279214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.279229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 
00:29:44.969 [2024-07-15 20:24:42.289170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.289300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.289317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.289325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.289331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.289346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.299051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.299131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.969 [2024-07-15 20:24:42.299148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.969 [2024-07-15 20:24:42.299155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.969 [2024-07-15 20:24:42.299161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.969 [2024-07-15 20:24:42.299177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.969 qpair failed and we were unable to recover it. 00:29:44.969 [2024-07-15 20:24:42.309172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.969 [2024-07-15 20:24:42.309250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.309267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.309274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.309280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.309295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 
00:29:44.970 [2024-07-15 20:24:42.319233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.319321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.319338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.319345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.319351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.319366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 00:29:44.970 [2024-07-15 20:24:42.329259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.329336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.329352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.329360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.329366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.329382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 00:29:44.970 [2024-07-15 20:24:42.339259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.339348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.339364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.339372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.339378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.339394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 
00:29:44.970 [2024-07-15 20:24:42.349271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.349348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.349364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.349375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.349381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.349397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 00:29:44.970 [2024-07-15 20:24:42.359359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.359451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.359467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.359474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.359480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.359495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 00:29:44.970 [2024-07-15 20:24:42.369367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.369441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.369457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.369464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.369470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.369485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 
00:29:44.970 [2024-07-15 20:24:42.379344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.379419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.379435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.379442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.379448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.379463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 00:29:44.970 [2024-07-15 20:24:42.389254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.970 [2024-07-15 20:24:42.389333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.970 [2024-07-15 20:24:42.389349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.970 [2024-07-15 20:24:42.389356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.970 [2024-07-15 20:24:42.389363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:44.970 [2024-07-15 20:24:42.389377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.970 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.399412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.399482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.399498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.399504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.399510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.399525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 
00:29:45.233 [2024-07-15 20:24:42.409441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.409519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.409535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.409542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.409548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.409563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.419462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.419542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.419557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.419565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.419571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.419585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.429462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.429537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.429553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.429560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.429566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.429581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 
00:29:45.233 [2024-07-15 20:24:42.439518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.439605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.439621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.439632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.439638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.439652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.449630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.449713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.449729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.449736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.449742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.449756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.459563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.459641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.459657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.459664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.459670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.459684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 
00:29:45.233 [2024-07-15 20:24:42.469508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.469585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.469601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.469608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.469614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.469628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.479581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.479668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.479684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.479691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.479697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.479711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 00:29:45.233 [2024-07-15 20:24:42.489693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.489774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.489790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.489797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.489803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.233 [2024-07-15 20:24:42.489817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.233 qpair failed and we were unable to recover it. 
00:29:45.233 [2024-07-15 20:24:42.499698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.233 [2024-07-15 20:24:42.499780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.233 [2024-07-15 20:24:42.499805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.233 [2024-07-15 20:24:42.499813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.233 [2024-07-15 20:24:42.499820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.499839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.509704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.509789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.509814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.509822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.509829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.509848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.519776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.519863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.519888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.519896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.519903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.519923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 
00:29:45.234 [2024-07-15 20:24:42.529797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.529884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.529914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.529923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.529929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.529949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.539802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.539895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.539921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.539929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.539936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.539955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.549734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.549818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.549844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.549852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.549859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.549878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 
00:29:45.234 [2024-07-15 20:24:42.559895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.559987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.560012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.560021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.560028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.560047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.569799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.569883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.569901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.569908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.569914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.569934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.579942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.580112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.580141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.580148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.580154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.580170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 
00:29:45.234 [2024-07-15 20:24:42.589923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.590000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.590016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.590023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.590029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.590043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.599966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.600046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.600062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.600069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.600075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.600089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.609986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.610062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.610078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.610085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.610091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.610105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 
00:29:45.234 [2024-07-15 20:24:42.620004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.620074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.620094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.620101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.620107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.620125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.630007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.630101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.630117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.630127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.630133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.630148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 00:29:45.234 [2024-07-15 20:24:42.639968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.234 [2024-07-15 20:24:42.640051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.234 [2024-07-15 20:24:42.640067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.234 [2024-07-15 20:24:42.640074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.234 [2024-07-15 20:24:42.640080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.234 [2024-07-15 20:24:42.640096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.234 qpair failed and we were unable to recover it. 
00:29:45.234 [2024-07-15 20:24:42.650088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.235 [2024-07-15 20:24:42.650184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.235 [2024-07-15 20:24:42.650200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.235 [2024-07-15 20:24:42.650207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.235 [2024-07-15 20:24:42.650213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.235 [2024-07-15 20:24:42.650228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.235 qpair failed and we were unable to recover it. 00:29:45.235 [2024-07-15 20:24:42.660119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.235 [2024-07-15 20:24:42.660204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.235 [2024-07-15 20:24:42.660220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.235 [2024-07-15 20:24:42.660227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.235 [2024-07-15 20:24:42.660237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.235 [2024-07-15 20:24:42.660252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.235 qpair failed and we were unable to recover it. 00:29:45.497 [2024-07-15 20:24:42.670154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.497 [2024-07-15 20:24:42.670232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.497 [2024-07-15 20:24:42.670248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.497 [2024-07-15 20:24:42.670255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.497 [2024-07-15 20:24:42.670261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.497 [2024-07-15 20:24:42.670276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.497 qpair failed and we were unable to recover it. 
00:29:45.497 [2024-07-15 20:24:42.680199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.497 [2024-07-15 20:24:42.680285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.497 [2024-07-15 20:24:42.680301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.497 [2024-07-15 20:24:42.680308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.497 [2024-07-15 20:24:42.680314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.497 [2024-07-15 20:24:42.680329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.497 qpair failed and we were unable to recover it. 00:29:45.497 [2024-07-15 20:24:42.690194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.497 [2024-07-15 20:24:42.690269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.497 [2024-07-15 20:24:42.690284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.497 [2024-07-15 20:24:42.690291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.497 [2024-07-15 20:24:42.690297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.690312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.700221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.700299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.700316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.700323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.700329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.700345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 
00:29:45.498 [2024-07-15 20:24:42.710232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.710316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.710332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.710340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.710345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.710360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.720315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.720395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.720411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.720418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.720424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.720439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.730170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.730244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.730260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.730267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.730273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.730287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 
00:29:45.498 [2024-07-15 20:24:42.740336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.740418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.740449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.740456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.740462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.740478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.750252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.750339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.750355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.750369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.750375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.750390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.760405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.760488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.760504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.760511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.760517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.760532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 
00:29:45.498 [2024-07-15 20:24:42.770399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.770470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.770486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.770493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.770499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.770513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.780403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.780474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.780490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.780497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.780503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.780517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.790445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.790520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.790536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.790543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.790549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.790563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 
00:29:45.498 [2024-07-15 20:24:42.800515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.800645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.800661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.800668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.800674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.800689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.810387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.810459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.498 [2024-07-15 20:24:42.810477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.498 [2024-07-15 20:24:42.810484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.498 [2024-07-15 20:24:42.810490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.498 [2024-07-15 20:24:42.810505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.498 qpair failed and we were unable to recover it. 00:29:45.498 [2024-07-15 20:24:42.820485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.498 [2024-07-15 20:24:42.820594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.820609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.820616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.820623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.820637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 
00:29:45.499 [2024-07-15 20:24:42.830558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.830634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.830650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.830657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.830663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.830678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.499 [2024-07-15 20:24:42.840592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.840673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.840689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.840699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.840705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.840720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.499 [2024-07-15 20:24:42.850614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.850701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.850717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.850724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.850730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.850745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 
00:29:45.499 [2024-07-15 20:24:42.860641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.860723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.860739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.860746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.860751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.860766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.499 [2024-07-15 20:24:42.870662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.870744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.870769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.870778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.870785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.870804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.499 [2024-07-15 20:24:42.880729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.880817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.880842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.880851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.880858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.880877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 
00:29:45.499 [2024-07-15 20:24:42.890770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.890853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.890879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.890888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.890894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.890913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.499 [2024-07-15 20:24:42.900708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.900800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.900825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.900834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.900841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.900860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.499 [2024-07-15 20:24:42.910771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.910855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.910873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.910880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.910887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.910902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 
00:29:45.499 [2024-07-15 20:24:42.920801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.499 [2024-07-15 20:24:42.920889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.499 [2024-07-15 20:24:42.920905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.499 [2024-07-15 20:24:42.920912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.499 [2024-07-15 20:24:42.920918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.499 [2024-07-15 20:24:42.920933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.499 qpair failed and we were unable to recover it. 00:29:45.762 [2024-07-15 20:24:42.930857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.762 [2024-07-15 20:24:42.930934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.762 [2024-07-15 20:24:42.930955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.762 [2024-07-15 20:24:42.930962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.762 [2024-07-15 20:24:42.930968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.762 [2024-07-15 20:24:42.930983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.762 qpair failed and we were unable to recover it. 00:29:45.762 [2024-07-15 20:24:42.940853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.762 [2024-07-15 20:24:42.940936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.762 [2024-07-15 20:24:42.940961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.762 [2024-07-15 20:24:42.940970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.762 [2024-07-15 20:24:42.940976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.762 [2024-07-15 20:24:42.940996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.762 qpair failed and we were unable to recover it. 
00:29:45.762 [2024-07-15 20:24:42.950893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.762 [2024-07-15 20:24:42.950969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.762 [2024-07-15 20:24:42.950987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.762 [2024-07-15 20:24:42.950994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.762 [2024-07-15 20:24:42.951001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.762 [2024-07-15 20:24:42.951016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.762 qpair failed and we were unable to recover it. 00:29:45.762 [2024-07-15 20:24:42.960950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.762 [2024-07-15 20:24:42.961066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.762 [2024-07-15 20:24:42.961084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.762 [2024-07-15 20:24:42.961092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.762 [2024-07-15 20:24:42.961098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.762 [2024-07-15 20:24:42.961114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.762 qpair failed and we were unable to recover it. 00:29:45.762 [2024-07-15 20:24:42.970890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.762 [2024-07-15 20:24:42.970977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.762 [2024-07-15 20:24:42.970994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.762 [2024-07-15 20:24:42.971001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.762 [2024-07-15 20:24:42.971008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:42.971028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 
00:29:45.763 [2024-07-15 20:24:42.980963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:42.981039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:42.981056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:42.981063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:42.981069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:42.981084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:42.990977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:42.991051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:42.991067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:42.991074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:42.991080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:42.991095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.001017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.001102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.001118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.001130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.001136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.001152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 
00:29:45.763 [2024-07-15 20:24:43.011018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.011094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.011110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.011117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.011127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.011142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.021079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.021160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.021180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.021187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.021193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.021208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.031074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.031250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.031277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.031284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.031290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.031307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 
00:29:45.763 [2024-07-15 20:24:43.041146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.041233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.041248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.041255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.041261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.041276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.051155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.051270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.051285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.051292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.051298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.051313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.061184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.061259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.061275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.061282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.061292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.061307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 
00:29:45.763 [2024-07-15 20:24:43.071260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.071336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.071352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.071359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.071365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.071379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.081198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.081285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.081300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.081307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.081313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.081328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.091254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.091365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.091381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.091388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.091393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.091408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 
00:29:45.763 [2024-07-15 20:24:43.101271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.763 [2024-07-15 20:24:43.101347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.763 [2024-07-15 20:24:43.101362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.763 [2024-07-15 20:24:43.101369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.763 [2024-07-15 20:24:43.101375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.763 [2024-07-15 20:24:43.101389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.763 qpair failed and we were unable to recover it. 00:29:45.763 [2024-07-15 20:24:43.111325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.111407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.111423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.111430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.111436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.111450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 00:29:45.764 [2024-07-15 20:24:43.121384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.121464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.121480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.121487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.121493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.121508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 
00:29:45.764 [2024-07-15 20:24:43.131322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.131396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.131412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.131419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.131425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.131439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 00:29:45.764 [2024-07-15 20:24:43.141384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.141463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.141479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.141486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.141492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.141506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 00:29:45.764 [2024-07-15 20:24:43.151417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.151497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.151513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.151520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.151530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.151545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 
00:29:45.764 [2024-07-15 20:24:43.161470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.161556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.161573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.161581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.161589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.161605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 00:29:45.764 [2024-07-15 20:24:43.171482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.171655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.171671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.171678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.171684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.171698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 00:29:45.764 [2024-07-15 20:24:43.181496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.181573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.181589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.181596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.181602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.181617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 
00:29:45.764 [2024-07-15 20:24:43.191530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.764 [2024-07-15 20:24:43.191607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.764 [2024-07-15 20:24:43.191622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.764 [2024-07-15 20:24:43.191629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.764 [2024-07-15 20:24:43.191635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:45.764 [2024-07-15 20:24:43.191650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.764 qpair failed and we were unable to recover it. 00:29:46.037 [2024-07-15 20:24:43.201609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.201704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.201720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.201727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.201733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.201748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 00:29:46.037 [2024-07-15 20:24:43.211565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.211658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.211674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.211681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.211687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.211702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 
00:29:46.037 [2024-07-15 20:24:43.221513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.221622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.221637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.221644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.221650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.221665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 00:29:46.037 [2024-07-15 20:24:43.231641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.231733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.231749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.231756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.231762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.231777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 00:29:46.037 [2024-07-15 20:24:43.241702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.241801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.241826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.241840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.241846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.241867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 
00:29:46.037 [2024-07-15 20:24:43.251701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.251795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.251821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.251829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.251836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.251855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 00:29:46.037 [2024-07-15 20:24:43.261700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.261786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.261811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.261820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.261826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.261846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 00:29:46.037 [2024-07-15 20:24:43.271679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.037 [2024-07-15 20:24:43.271760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.037 [2024-07-15 20:24:43.271778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.037 [2024-07-15 20:24:43.271785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.037 [2024-07-15 20:24:43.271791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.037 [2024-07-15 20:24:43.271806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.037 qpair failed and we were unable to recover it. 
00:29:46.037 [2024-07-15 20:24:43.281788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.281880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.281905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.281913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.281920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.281940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.291765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.291851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.291876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.291885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.291891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.291910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.301820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.301900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.301926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.301934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.301941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.301960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 
00:29:46.038 [2024-07-15 20:24:43.311857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.311940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.311965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.311974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.311981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.312000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.321921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.322011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.322029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.322036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.322042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.322058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.331907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.331984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.332005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.332013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.332019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.332035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 
00:29:46.038 [2024-07-15 20:24:43.341901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.341976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.341993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.342000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.342006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.342022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.351965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.352072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.352090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.352100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.352106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.352127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.361991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.362075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.362092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.362099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.362104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.362120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 
00:29:46.038 [2024-07-15 20:24:43.371983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.372059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.372075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.372082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.372088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.372106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.381911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.381989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.382005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.382012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.382018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.382033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 00:29:46.038 [2024-07-15 20:24:43.391998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.392079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.392095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.038 [2024-07-15 20:24:43.392101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.038 [2024-07-15 20:24:43.392107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.038 [2024-07-15 20:24:43.392125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.038 qpair failed and we were unable to recover it. 
00:29:46.038 [2024-07-15 20:24:43.402100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.038 [2024-07-15 20:24:43.402189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.038 [2024-07-15 20:24:43.402205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.402212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.402218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.402233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 00:29:46.039 [2024-07-15 20:24:43.412107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.039 [2024-07-15 20:24:43.412191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.039 [2024-07-15 20:24:43.412207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.412213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.412220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.412236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 00:29:46.039 [2024-07-15 20:24:43.422165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.039 [2024-07-15 20:24:43.422245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.039 [2024-07-15 20:24:43.422265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.422272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.422278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.422294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 
00:29:46.039 [2024-07-15 20:24:43.432157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.039 [2024-07-15 20:24:43.432238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.039 [2024-07-15 20:24:43.432254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.432261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.432267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.432282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 00:29:46.039 [2024-07-15 20:24:43.442147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.039 [2024-07-15 20:24:43.442234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.039 [2024-07-15 20:24:43.442250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.442257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.442263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.442278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 00:29:46.039 [2024-07-15 20:24:43.452221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.039 [2024-07-15 20:24:43.452296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.039 [2024-07-15 20:24:43.452312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.452319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.452325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.452340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 
00:29:46.039 [2024-07-15 20:24:43.462253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.039 [2024-07-15 20:24:43.462325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.039 [2024-07-15 20:24:43.462341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.039 [2024-07-15 20:24:43.462348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.039 [2024-07-15 20:24:43.462361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.039 [2024-07-15 20:24:43.462376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.039 qpair failed and we were unable to recover it. 00:29:46.303 [2024-07-15 20:24:43.472280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.303 [2024-07-15 20:24:43.472359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.303 [2024-07-15 20:24:43.472376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.303 [2024-07-15 20:24:43.472383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.303 [2024-07-15 20:24:43.472388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.303 [2024-07-15 20:24:43.472403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.303 qpair failed and we were unable to recover it. 00:29:46.303 [2024-07-15 20:24:43.482358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.303 [2024-07-15 20:24:43.482448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.303 [2024-07-15 20:24:43.482464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.303 [2024-07-15 20:24:43.482471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.303 [2024-07-15 20:24:43.482477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.303 [2024-07-15 20:24:43.482492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.303 qpair failed and we were unable to recover it. 
00:29:46.303 [2024-07-15 20:24:43.492351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.303 [2024-07-15 20:24:43.492461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.303 [2024-07-15 20:24:43.492478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.303 [2024-07-15 20:24:43.492487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.303 [2024-07-15 20:24:43.492494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.303 [2024-07-15 20:24:43.492510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.303 qpair failed and we were unable to recover it. 00:29:46.303 [2024-07-15 20:24:43.502350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.303 [2024-07-15 20:24:43.502427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.303 [2024-07-15 20:24:43.502443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.303 [2024-07-15 20:24:43.502450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.303 [2024-07-15 20:24:43.502456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.303 [2024-07-15 20:24:43.502471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.303 qpair failed and we were unable to recover it. 00:29:46.303 [2024-07-15 20:24:43.512363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.303 [2024-07-15 20:24:43.512447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.303 [2024-07-15 20:24:43.512463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.303 [2024-07-15 20:24:43.512470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.303 [2024-07-15 20:24:43.512476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.303 [2024-07-15 20:24:43.512491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.303 qpair failed and we were unable to recover it. 
00:29:46.303 [2024-07-15 20:24:43.522434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.303 [2024-07-15 20:24:43.522518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.522533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.522540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.522546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.522561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.532317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.532414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.532431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.532437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.532443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.532458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.542477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.542561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.542576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.542584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.542590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.542604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 
00:29:46.304 [2024-07-15 20:24:43.552492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.552569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.552585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.552592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.552602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.552616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.562564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.562689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.562705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.562712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.562718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.562733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.572441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.572515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.572531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.572538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.572544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.572559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 
00:29:46.304 [2024-07-15 20:24:43.582553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.582628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.582644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.582651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.582657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.582672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.592608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.592683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.592699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.592706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.592711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.592726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.602687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.602778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.602793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.602800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.602806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.602821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 
00:29:46.304 [2024-07-15 20:24:43.612653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.612725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.612741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.612748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.612753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.612768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.622737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.622823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.622849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.622858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.622864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.622883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.632597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.632681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.632706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.632715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.632722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.632741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 
00:29:46.304 [2024-07-15 20:24:43.642850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.642945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.642971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.642984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.642991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.643012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.652829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.652926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.652944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.652951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.304 [2024-07-15 20:24:43.652957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.304 [2024-07-15 20:24:43.652974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.304 qpair failed and we were unable to recover it. 00:29:46.304 [2024-07-15 20:24:43.662764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.304 [2024-07-15 20:24:43.662847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.304 [2024-07-15 20:24:43.662872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.304 [2024-07-15 20:24:43.662880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.662887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.662906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 
00:29:46.305 [2024-07-15 20:24:43.672819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.672902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.672919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.672926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.672932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.672948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 00:29:46.305 [2024-07-15 20:24:43.682898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.682971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.682988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.682995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.683001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.683017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 00:29:46.305 [2024-07-15 20:24:43.692852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.692931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.692947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.692954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.692960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.692975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 
00:29:46.305 [2024-07-15 20:24:43.702871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.702948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.702964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.702971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.702977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.702992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 00:29:46.305 [2024-07-15 20:24:43.712935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.713011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.713027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.713034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.713040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.713055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 00:29:46.305 [2024-07-15 20:24:43.723012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.723101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.723118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.723129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.723135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.723150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 
00:29:46.305 [2024-07-15 20:24:43.732982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.305 [2024-07-15 20:24:43.733059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.305 [2024-07-15 20:24:43.733080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.305 [2024-07-15 20:24:43.733090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.305 [2024-07-15 20:24:43.733097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.305 [2024-07-15 20:24:43.733114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.305 qpair failed and we were unable to recover it. 00:29:46.567 [2024-07-15 20:24:43.742893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.742966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.742983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.742991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.742997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.743012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.753043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.753138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.753155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.753163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.753169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.753184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.763114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.763236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.763252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.763259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.763265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.763280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.773149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.773258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.773274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.773281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.773287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.773306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.783146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.783221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.783237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.783244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.783250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.783265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.793127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.793208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.793224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.793231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.793236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.793251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.803195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.803282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.803298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.803305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.803311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.803326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.813206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.813285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.813301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.813308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.813314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.813329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.823197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.823270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.823289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.823296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.823302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.823317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.833257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.833335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.833351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.833358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.833363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.833378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.843336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.843419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.843435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.843442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.843447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.843462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.853277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.853360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.853376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.853382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.853388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.853403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.863353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.863428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.863444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.863451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.863457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.863476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.873356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.873432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.873448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.873455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.873460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.873476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.883398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.883479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.883496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.883503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.883508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.883523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.893435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.893521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.893536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.893543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.893549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.893564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.903441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.903513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.903529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.903536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.903542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.903556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.913480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.913557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.913572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.913579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.913585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.913599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.923413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.923500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.923516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.923522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.923528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.923542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.933535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.933703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.933718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.933725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.933731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.933746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.943568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.943656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.943672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.943678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.943684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.943699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.953568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.953643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.953658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.953665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.953675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.953690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.963623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.963709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.963734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.963743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.963750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.963769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 
00:29:46.568 [2024-07-15 20:24:43.973657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.973743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.973768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.973777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.973783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.973803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.983591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.983677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.983702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.568 [2024-07-15 20:24:43.983711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.568 [2024-07-15 20:24:43.983717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.568 [2024-07-15 20:24:43.983736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.568 qpair failed and we were unable to recover it. 00:29:46.568 [2024-07-15 20:24:43.993694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.568 [2024-07-15 20:24:43.993777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.568 [2024-07-15 20:24:43.993802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.569 [2024-07-15 20:24:43.993810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.569 [2024-07-15 20:24:43.993817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.569 [2024-07-15 20:24:43.993837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.569 qpair failed and we were unable to recover it. 
00:29:46.832 [2024-07-15 20:24:44.003766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.003862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.003888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.003896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.003903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.003922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 00:29:46.832 [2024-07-15 20:24:44.013759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.013884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.013902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.013909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.013915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.013931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 00:29:46.832 [2024-07-15 20:24:44.023752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.023833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.023858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.023867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.023873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.023893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 
00:29:46.832 [2024-07-15 20:24:44.033684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.033762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.033779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.033786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.033792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.033808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 00:29:46.832 [2024-07-15 20:24:44.043919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.044031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.044049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.044062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.044068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.044087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 00:29:46.832 [2024-07-15 20:24:44.053765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.053838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.053854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.053861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.053867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.053882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 
00:29:46.832 [2024-07-15 20:24:44.063884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.063959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.063975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.063982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.063988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.064003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 00:29:46.832 [2024-07-15 20:24:44.073800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.832 [2024-07-15 20:24:44.073877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.832 [2024-07-15 20:24:44.073893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.832 [2024-07-15 20:24:44.073901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.832 [2024-07-15 20:24:44.073907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.832 [2024-07-15 20:24:44.073922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.832 qpair failed and we were unable to recover it. 00:29:46.832 [2024-07-15 20:24:44.083955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.084080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.084096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.084103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.084110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.084129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 
00:29:46.833 [2024-07-15 20:24:44.093934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.094014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.094030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.094037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.094043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.094058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.103994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.104069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.104085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.104092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.104098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.104113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.114024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.114100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.114116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.114129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.114136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.114151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 
00:29:46.833 [2024-07-15 20:24:44.124069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.124156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.124172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.124179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.124185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.124199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.134043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.134128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.134144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.134155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.134161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.134176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.144080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.144162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.144178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.144186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.144192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.144207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 
00:29:46.833 [2024-07-15 20:24:44.154129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.154207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.154223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.154230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.154236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.154251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.164206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.164292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.164308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.164315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.164321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.164336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.174080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.174192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.174208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.174215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.174221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.174237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 
00:29:46.833 [2024-07-15 20:24:44.184225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.184302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.184318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.184325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.184331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.184346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.194181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.194274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.194289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.194296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.194302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.194317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.204329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.204413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.204429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.204435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.204441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.204455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 
00:29:46.833 [2024-07-15 20:24:44.214257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.214336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.214353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.214360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.833 [2024-07-15 20:24:44.214366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.833 [2024-07-15 20:24:44.214381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.833 qpair failed and we were unable to recover it. 00:29:46.833 [2024-07-15 20:24:44.224342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.833 [2024-07-15 20:24:44.224504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.833 [2024-07-15 20:24:44.224527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.833 [2024-07-15 20:24:44.224534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.834 [2024-07-15 20:24:44.224540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.834 [2024-07-15 20:24:44.224555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.834 qpair failed and we were unable to recover it. 00:29:46.834 [2024-07-15 20:24:44.234318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.834 [2024-07-15 20:24:44.234396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.834 [2024-07-15 20:24:44.234411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.834 [2024-07-15 20:24:44.234418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.834 [2024-07-15 20:24:44.234424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.834 [2024-07-15 20:24:44.234439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.834 qpair failed and we were unable to recover it. 
00:29:46.834 [2024-07-15 20:24:44.244478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.834 [2024-07-15 20:24:44.244594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.834 [2024-07-15 20:24:44.244611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.834 [2024-07-15 20:24:44.244618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.834 [2024-07-15 20:24:44.244624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.834 [2024-07-15 20:24:44.244639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.834 qpair failed and we were unable to recover it. 00:29:46.834 [2024-07-15 20:24:44.254381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.834 [2024-07-15 20:24:44.254489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.834 [2024-07-15 20:24:44.254505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.834 [2024-07-15 20:24:44.254512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.834 [2024-07-15 20:24:44.254518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:46.834 [2024-07-15 20:24:44.254533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.834 qpair failed and we were unable to recover it. 00:29:47.096 [2024-07-15 20:24:44.264446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.264522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.264537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.264544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.264551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.264569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 
00:29:47.096 [2024-07-15 20:24:44.274438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.274514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.274530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.274537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.274543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.274557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 00:29:47.096 [2024-07-15 20:24:44.284498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.284587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.284603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.284610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.284615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.284630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 00:29:47.096 [2024-07-15 20:24:44.294550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.294674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.294689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.294696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.294702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.294717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 
00:29:47.096 [2024-07-15 20:24:44.304510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.304582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.304598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.304605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.304611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.304625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 00:29:47.096 [2024-07-15 20:24:44.314577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.314659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.314678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.314685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.314692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.314706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 00:29:47.096 [2024-07-15 20:24:44.324622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.324712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.324728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.324735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.324741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.324755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 
00:29:47.096 [2024-07-15 20:24:44.334608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.334692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.334717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.334726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.334732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.334751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 00:29:47.096 [2024-07-15 20:24:44.344656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.096 [2024-07-15 20:24:44.344730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.096 [2024-07-15 20:24:44.344749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.096 [2024-07-15 20:24:44.344756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.096 [2024-07-15 20:24:44.344762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.096 [2024-07-15 20:24:44.344778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.096 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.354688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.354767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.354783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.354790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.354801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.354816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 
00:29:47.097 [2024-07-15 20:24:44.364784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.364896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.364912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.364919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.364925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.364940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.374731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.374809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.374825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.374832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.374838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.374853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.384734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.384809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.384825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.384832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.384838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.384852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 
00:29:47.097 [2024-07-15 20:24:44.394764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.394838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.394854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.394861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.394867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.394882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.404838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.404929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.404945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.404952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.404958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.404973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.414848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.414933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.414950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.414956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.414962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.414977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 
00:29:47.097 [2024-07-15 20:24:44.424798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.424884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.424900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.424907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.424913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.424928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.434870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.434951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.434967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.434974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.434980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.434995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 00:29:47.097 [2024-07-15 20:24:44.444933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.445011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.445027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.445038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.445044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.097 [2024-07-15 20:24:44.445059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.097 qpair failed and we were unable to recover it. 
00:29:47.097 [2024-07-15 20:24:44.454944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.097 [2024-07-15 20:24:44.455016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.097 [2024-07-15 20:24:44.455033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.097 [2024-07-15 20:24:44.455040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.097 [2024-07-15 20:24:44.455047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.455062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 00:29:47.098 [2024-07-15 20:24:44.464980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.465053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.465069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.465076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.465082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.465096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 00:29:47.098 [2024-07-15 20:24:44.475007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.475085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.475102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.475109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.475115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.475133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 
00:29:47.098 [2024-07-15 20:24:44.485083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.485185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.485201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.485208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.485214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.485228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 00:29:47.098 [2024-07-15 20:24:44.495043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.495156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.495172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.495179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.495185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.495201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 00:29:47.098 [2024-07-15 20:24:44.505048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.505128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.505144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.505152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.505158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.505172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 
00:29:47.098 [2024-07-15 20:24:44.515090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.515171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.515187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.515195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.515201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.515216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 00:29:47.098 [2024-07-15 20:24:44.525224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.098 [2024-07-15 20:24:44.525307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.098 [2024-07-15 20:24:44.525323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.098 [2024-07-15 20:24:44.525330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.098 [2024-07-15 20:24:44.525336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.098 [2024-07-15 20:24:44.525351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.098 qpair failed and we were unable to recover it. 00:29:47.360 [2024-07-15 20:24:44.535171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.360 [2024-07-15 20:24:44.535245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.360 [2024-07-15 20:24:44.535260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.360 [2024-07-15 20:24:44.535271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.360 [2024-07-15 20:24:44.535277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.360 [2024-07-15 20:24:44.535292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.360 qpair failed and we were unable to recover it. 
00:29:47.360 [2024-07-15 20:24:44.545167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.360 [2024-07-15 20:24:44.545256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.545272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.545278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.545284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.545299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.555197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.555274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.555290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.555297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.555303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.555317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.565261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.565346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.565361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.565368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.565374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.565389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 
00:29:47.361 [2024-07-15 20:24:44.575269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.575350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.575366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.575373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.575379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.575394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.585298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.585376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.585392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.585399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.585404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.585419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.595339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.595421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.595438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.595444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.595450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.595465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 
00:29:47.361 [2024-07-15 20:24:44.605361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.605444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.605460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.605467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.605473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.605487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.615385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.615500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.615516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.615523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.615528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.615543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.625377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.625449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.625468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.625475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.625481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.625495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 
00:29:47.361 [2024-07-15 20:24:44.635412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.635489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.635505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.635512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.635518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.635532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.645487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.645570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.645586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.645593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.645599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.645614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.655487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.655563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.655579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.655586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.655592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.655607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 
00:29:47.361 [2024-07-15 20:24:44.665528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.665606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.665622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.665629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.665635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.665652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.361 [2024-07-15 20:24:44.675535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.361 [2024-07-15 20:24:44.675661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.361 [2024-07-15 20:24:44.675677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.361 [2024-07-15 20:24:44.675684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.361 [2024-07-15 20:24:44.675690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.361 [2024-07-15 20:24:44.675704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.361 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.685623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.685711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.685727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.685734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.685740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.685755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 
00:29:47.362 [2024-07-15 20:24:44.695580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.695656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.695672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.695679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.695685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.695700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.705609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.705737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.705763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.705771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.705778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.705797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.715658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.715741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.715771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.715780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.715786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.715806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 
00:29:47.362 [2024-07-15 20:24:44.725721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.725818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.725843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.725852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.725859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.725878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.735697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.735781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.735807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.735817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.735824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.735843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.745752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.745847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.745873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.745883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.745891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.745910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 
00:29:47.362 [2024-07-15 20:24:44.755824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.755930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.755947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.755955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.755966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.755982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.765759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.765863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.765879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.765886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.765893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e78000b90 00:29:47.362 [2024-07-15 20:24:44.765908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 [2024-07-15 20:24:44.775881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.776087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.776164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.776189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.776210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e88000b90 00:29:47.362 [2024-07-15 20:24:44.776263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.362 qpair failed and we were unable to recover it. 
00:29:47.362 [2024-07-15 20:24:44.785890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.362 [2024-07-15 20:24:44.786023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.362 [2024-07-15 20:24:44.786057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.362 [2024-07-15 20:24:44.786073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.362 [2024-07-15 20:24:44.786086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e88000b90 00:29:47.362 [2024-07-15 20:24:44.786117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.362 qpair failed and we were unable to recover it. 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Write completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Write completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Write completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Write completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Write completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Read completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.362 Write completed with error (sct=0, sc=8) 00:29:47.362 starting I/O failed 00:29:47.363 Write completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Write completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Read completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Read completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Write completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Read completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Write completed with error 
(sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Write completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 Write completed with error (sct=0, sc=8) 00:29:47.363 starting I/O failed 00:29:47.363 [2024-07-15 20:24:44.786457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.624 [2024-07-15 20:24:44.795883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.624 [2024-07-15 20:24:44.795947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.624 [2024-07-15 20:24:44.795963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.624 [2024-07-15 20:24:44.795969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.624 [2024-07-15 20:24:44.795974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e80000b90 00:29:47.624 [2024-07-15 20:24:44.795987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.624 qpair failed and we were unable to recover it. 00:29:47.624 [2024-07-15 20:24:44.805889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.624 [2024-07-15 20:24:44.805957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.624 [2024-07-15 20:24:44.805970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.624 [2024-07-15 20:24:44.805978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.624 [2024-07-15 20:24:44.805983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9e80000b90 00:29:47.624 [2024-07-15 20:24:44.805995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.624 qpair failed and we were unable to recover it. 
00:29:47.624 [2024-07-15 20:24:44.806241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0f20 is same with the state(5) to be set 00:29:47.624 [2024-07-15 20:24:44.815885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.624 [2024-07-15 20:24:44.815969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.624 [2024-07-15 20:24:44.815994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.624 [2024-07-15 20:24:44.816003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.624 [2024-07-15 20:24:44.816011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1593220 00:29:47.625 [2024-07-15 20:24:44.816032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.625 qpair failed and we were unable to recover it. 00:29:47.625 [2024-07-15 20:24:44.825942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.625 [2024-07-15 20:24:44.826024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.625 [2024-07-15 20:24:44.826043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.625 [2024-07-15 20:24:44.826051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.625 [2024-07-15 20:24:44.826058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1593220 00:29:47.625 [2024-07-15 20:24:44.826073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.625 qpair failed and we were unable to recover it. 00:29:47.625 [2024-07-15 20:24:44.826580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a0f20 (9): Bad file descriptor 00:29:47.625 Initializing NVMe Controllers 00:29:47.625 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:47.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:47.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:47.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:47.625 Initialization complete. Launching workers. 
00:29:47.625 Starting thread on core 1 00:29:47.625 Starting thread on core 2 00:29:47.625 Starting thread on core 3 00:29:47.625 Starting thread on core 0 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:47.625 00:29:47.625 real 0m11.276s 00:29:47.625 user 0m20.353s 00:29:47.625 sys 0m4.278s 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.625 ************************************ 00:29:47.625 END TEST nvmf_target_disconnect_tc2 00:29:47.625 ************************************ 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.625 rmmod nvme_tcp 00:29:47.625 rmmod nvme_fabrics 00:29:47.625 rmmod nvme_keyring 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1178879 ']' 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1178879 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1178879 ']' 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1178879 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:47.625 20:24:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1178879 00:29:47.625 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:47.625 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:47.625 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1178879' 00:29:47.625 killing process with pid 1178879 00:29:47.625 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1178879 00:29:47.625 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1178879 00:29:47.886 
20:24:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.886 20:24:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.798 20:24:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.798 00:29:49.798 real 0m21.166s 00:29:49.798 user 0m47.716s 00:29:49.798 sys 0m10.012s 00:29:49.798 20:24:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.798 20:24:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:49.798 ************************************ 00:29:49.798 END TEST nvmf_target_disconnect 00:29:49.798 ************************************ 00:29:50.058 20:24:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:50.058 20:24:47 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:50.058 20:24:47 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.058 20:24:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.058 20:24:47 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:50.058 00:29:50.058 real 22m39.233s 00:29:50.058 user 47m10.708s 00:29:50.058 sys 7m7.404s 00:29:50.058 20:24:47 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.058 20:24:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.058 ************************************ 00:29:50.058 END TEST nvmf_tcp 00:29:50.058 ************************************ 00:29:50.058 20:24:47 -- common/autotest_common.sh@1142 -- # return 0 00:29:50.058 20:24:47 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:50.058 20:24:47 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:50.058 20:24:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:50.058 20:24:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.058 20:24:47 -- common/autotest_common.sh@10 -- # set +x 00:29:50.058 ************************************ 00:29:50.058 START TEST spdkcli_nvmf_tcp 00:29:50.058 ************************************ 00:29:50.058 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:50.058 * Looking for test storage... 
00:29:50.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:50.058 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:50.058 20:24:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:50.058 20:24:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:50.058 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.058 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.318 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1180711 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1180711 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1180711 ']' 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:50.319 20:24:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.319 [2024-07-15 20:24:47.575022] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:29:50.319 [2024-07-15 20:24:47.575093] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180711 ] 00:29:50.319 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.319 [2024-07-15 20:24:47.638641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:50.319 [2024-07-15 20:24:47.714244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.319 [2024-07-15 20:24:47.714405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.259 20:24:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:51.259 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:51.259 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:51.259 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:51.259 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:51.259 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:51.259 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:51.259 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:51.259 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:51.259 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:51.259 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:51.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:51.259 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:51.259 ' 00:29:53.842 [2024-07-15 20:24:50.720636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.783 [2024-07-15 20:24:51.884426] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:56.696 [2024-07-15 20:24:54.026688] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:58.608 [2024-07-15 20:24:55.864198] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:59.993 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:59.993 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:59.993 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:59.993 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:59.993 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:59.993 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:59.993 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:59.993 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:59.993 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:59.993 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:59.993 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:59.993 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:59.993 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:59.993 20:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:59.993 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:59.993 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.254 20:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:00.254 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:00.254 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.254 20:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:00.254 20:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.516 20:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:00.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:00.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:00.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:00.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:00.516 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:00.516 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:00.516 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:00.516 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:00.516 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:00.516 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:00.516 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:00.516 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:00.516 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:00.516 ' 00:30:05.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:05.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:05.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:05.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:05.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:05.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:05.805 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:05.805 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:05.805 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:05.805 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:05.805 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:05.805 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:05.805 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:05.805 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1180711 ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1180711' 00:30:05.805 killing process with pid 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1180711 ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1180711 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1180711 ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1180711 00:30:05.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1180711) - No such process 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1180711 is not found' 00:30:05.805 Process with pid 1180711 is not found 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:05.805 00:30:05.805 real 0m15.580s 00:30:05.805 user 0m32.062s 00:30:05.805 sys 0m0.716s 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.805 20:25:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.805 ************************************ 00:30:05.805 END TEST spdkcli_nvmf_tcp 00:30:05.805 ************************************ 00:30:05.805 20:25:02 -- common/autotest_common.sh@1142 -- # return 0 00:30:05.805 20:25:02 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:05.805 20:25:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:05.805 20:25:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.805 20:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:05.805 ************************************ 00:30:05.805 START TEST nvmf_identify_passthru 00:30:05.805 ************************************ 00:30:05.805 20:25:03 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:05.805 * Looking for test storage... 00:30:05.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.805 20:25:03 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.805 20:25:03 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.805 20:25:03 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.805 20:25:03 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.805 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.805 20:25:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.805 20:25:03 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.805 20:25:03 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.805 20:25:03 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.805 20:25:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.806 20:25:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:05.806 20:25:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.806 20:25:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.806 20:25:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:05.806 20:25:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:05.806 20:25:03 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.806 20:25:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.949 20:25:10 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:13.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:13.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:13.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:13.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
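Annotation: the gather_supported_nvmf_pci_devs trace above matches the host's e810 NICs by PCI ID and then resolves each PCI function to its kernel interface by listing /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that sysfs lookup follows; it is not SPDK's nvmf/common.sh (which also filters by vendor/device ID and RDMA vs TCP), and only the two PCI addresses are taken from this log.

#!/usr/bin/env bash
# Sketch only: map NIC PCI functions to their kernel netdev names via sysfs.
set -euo pipefail

pci_funcs=("0000:4b:00.0" "0000:4b:00.1")   # e810 ports reported in the trace above

for pci in "${pci_funcs[@]}"; do
    # Each entry under /sys/bus/pci/devices/<bdf>/net/ is a kernel netdev name
    # (cvl_0_0 / cvl_0_1 on this CI host).
    for dev in "/sys/bus/pci/devices/${pci}/net/"*; do
        [[ -e "$dev" ]] || continue          # skip functions with no bound netdev
        echo "Found net device under ${pci}: ${dev##*/}"
    done
done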
00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:30:13.949 00:30:13.949 --- 10.0.0.2 ping statistics --- 00:30:13.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.949 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:30:13.949 00:30:13.949 --- 10.0.0.1 ping statistics --- 00:30:13.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.949 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:13.949 20:25:10 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:13.949 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.949 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:13.949 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:13.950 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:13.950 20:25:10 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:13.950 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.950 
20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:13.950 20:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:13.950 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1187464 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1187464 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1187464 ']' 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:14.211 20:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:14.211 20:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:14.211 [2024-07-15 20:25:11.549306] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:30:14.211 [2024-07-15 20:25:11.549360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.211 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.211 [2024-07-15 20:25:11.614994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.471 [2024-07-15 20:25:11.683192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.472 [2024-07-15 20:25:11.683229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
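Annotation: the nvme_serial_number / nvme_model_number values above come from running spdk_nvme_identify against the local PCIe controller and scraping its output with grep and awk '{print $3}'. Because awk keeps only the third whitespace-separated field, a multi-word model string is truncated to its first word, which is why the trace records just "SAMSUNG"; the later comparison still passes because the fabric-side value is scraped the same way. A hedged sketch of that scrape step follows; the relative SPDK_BIN path is an assumption for illustration (the log uses the absolute workspace path), while the bdf 0000:65:00.0 and the grep/awk patterns are taken from the trace.

#!/usr/bin/env bash
# Sketch of the identify-and-scrape step traced above.
set -euo pipefail

SPDK_BIN=./build/bin/spdk_nvme_identify   # assumed relative path to the identify tool
BDF=0000:65:00.0                          # controller address from this log

serial=$("$SPDK_BIN" -r "trtype:PCIe traddr:${BDF}" -i 0 \
           | grep 'Serial Number:' | awk '{print $3}')
# awk '{print $3}' keeps only the first token of the value, so "SAMSUNG MZ..."
# style model strings collapse to "SAMSUNG".
model=$("$SPDK_BIN" -r "trtype:PCIe traddr:${BDF}" -i 0 \
           | grep 'Model Number:' | awk '{print $3}')

echo "serial=${serial} model=${model}"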
00:30:14.472 [2024-07-15 20:25:11.683237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.472 [2024-07-15 20:25:11.683247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.472 [2024-07-15 20:25:11.683253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.472 [2024-07-15 20:25:11.683333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.472 [2024-07-15 20:25:11.683466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.472 [2024-07-15 20:25:11.683621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.472 [2024-07-15 20:25:11.683623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.043 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:15.043 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:15.043 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.044 INFO: Log level set to 20 00:30:15.044 INFO: Requests: 00:30:15.044 { 00:30:15.044 "jsonrpc": "2.0", 00:30:15.044 "method": "nvmf_set_config", 00:30:15.044 "id": 1, 00:30:15.044 "params": { 00:30:15.044 "admin_cmd_passthru": { 00:30:15.044 "identify_ctrlr": true 00:30:15.044 } 00:30:15.044 } 00:30:15.044 } 00:30:15.044 00:30:15.044 INFO: response: 00:30:15.044 { 00:30:15.044 "jsonrpc": "2.0", 00:30:15.044 "id": 1, 00:30:15.044 "result": true 00:30:15.044 } 00:30:15.044 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.044 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.044 INFO: Setting log level to 20 00:30:15.044 INFO: Setting log level to 20 00:30:15.044 INFO: Log level set to 20 00:30:15.044 INFO: Log level set to 20 00:30:15.044 INFO: Requests: 00:30:15.044 { 00:30:15.044 "jsonrpc": "2.0", 00:30:15.044 "method": "framework_start_init", 00:30:15.044 "id": 1 00:30:15.044 } 00:30:15.044 00:30:15.044 INFO: Requests: 00:30:15.044 { 00:30:15.044 "jsonrpc": "2.0", 00:30:15.044 "method": "framework_start_init", 00:30:15.044 "id": 1 00:30:15.044 } 00:30:15.044 00:30:15.044 [2024-07-15 20:25:12.400546] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:15.044 INFO: response: 00:30:15.044 { 00:30:15.044 "jsonrpc": "2.0", 00:30:15.044 "id": 1, 00:30:15.044 "result": true 00:30:15.044 } 00:30:15.044 00:30:15.044 INFO: response: 00:30:15.044 { 00:30:15.044 "jsonrpc": "2.0", 00:30:15.044 "id": 1, 00:30:15.044 "result": true 00:30:15.044 } 00:30:15.044 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.044 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.044 20:25:12 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:15.044 INFO: Setting log level to 40 00:30:15.044 INFO: Setting log level to 40 00:30:15.044 INFO: Setting log level to 40 00:30:15.044 [2024-07-15 20:25:12.413871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.044 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.044 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.044 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.615 Nvme0n1 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.615 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.615 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.615 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.616 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.616 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.616 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.616 [2024-07-15 20:25:12.797528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.616 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.616 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:15.616 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.616 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.616 [ 00:30:15.616 { 00:30:15.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:15.616 "subtype": "Discovery", 00:30:15.616 "listen_addresses": [], 00:30:15.616 "allow_any_host": true, 00:30:15.616 "hosts": [] 00:30:15.616 }, 00:30:15.616 { 00:30:15.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.616 "subtype": "NVMe", 00:30:15.616 "listen_addresses": [ 00:30:15.616 { 00:30:15.616 "trtype": "TCP", 00:30:15.616 "adrfam": "IPv4", 00:30:15.616 "traddr": "10.0.0.2", 00:30:15.616 "trsvcid": "4420" 00:30:15.616 } 00:30:15.616 ], 00:30:15.616 "allow_any_host": true, 00:30:15.616 "hosts": [], 00:30:15.616 "serial_number": 
"SPDK00000000000001", 00:30:15.616 "model_number": "SPDK bdev Controller", 00:30:15.616 "max_namespaces": 1, 00:30:15.616 "min_cntlid": 1, 00:30:15.616 "max_cntlid": 65519, 00:30:15.616 "namespaces": [ 00:30:15.616 { 00:30:15.616 "nsid": 1, 00:30:15.616 "bdev_name": "Nvme0n1", 00:30:15.616 "name": "Nvme0n1", 00:30:15.616 "nguid": "36344730526054870025384500000044", 00:30:15.616 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:15.616 } 00:30:15.616 ] 00:30:15.616 } 00:30:15.616 ] 00:30:15.616 20:25:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.616 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:15.616 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:15.616 20:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:15.616 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.876 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:15.876 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:15.876 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:15.876 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:15.876 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.136 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:16.136 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.136 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:16.136 20:25:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:16.136 rmmod nvme_tcp 00:30:16.136 rmmod nvme_fabrics 00:30:16.136 rmmod nvme_keyring 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:16.136 20:25:13 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1187464 ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1187464 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1187464 ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1187464 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187464 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187464' 00:30:16.136 killing process with pid 1187464 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1187464 00:30:16.136 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1187464 00:30:16.396 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:16.396 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:16.396 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:16.396 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.396 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:16.396 20:25:13 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.396 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:16.396 20:25:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.939 20:25:15 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:18.939 00:30:18.939 real 0m12.809s 00:30:18.939 user 0m10.606s 00:30:18.939 sys 0m6.110s 00:30:18.939 20:25:15 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:18.939 20:25:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:18.939 ************************************ 00:30:18.939 END TEST nvmf_identify_passthru 00:30:18.939 ************************************ 00:30:18.939 20:25:15 -- common/autotest_common.sh@1142 -- # return 0 00:30:18.939 20:25:15 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:18.939 20:25:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:18.939 20:25:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.939 20:25:15 -- common/autotest_common.sh@10 -- # set +x 00:30:18.939 ************************************ 00:30:18.939 START TEST nvmf_dif 00:30:18.939 ************************************ 00:30:18.939 20:25:15 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:18.939 * Looking for test storage... 
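Annotation: the tail of the identify_passthru run above tears the fabric environment back down (unloads nvme-tcp/nvme-fabrics on the initiator side, kills the target PID, flushes the test interface) before nvmf_dif rebuilds it. A best-effort sketch of that kind of cleanup follows; the interface and namespace names (cvl_0_1, cvl_0_0, cvl_0_0_ns_spdk) are from this log, while the TGT_PID variable and the explicit "ip netns del" are illustrative assumptions, since the log's _remove_spdk_ns runs with tracing disabled.

#!/usr/bin/env bash
# Sketch only, not SPDK's nvmftestfini: best-effort cleanup of the TCP test setup.
set -u

TGT_PID=${TGT_PID:-}                                      # pid of the nvmf_tgt, if known

modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true     # drop initiator-side modules
[[ -n "$TGT_PID" ]] && kill "$TGT_PID" 2>/dev/null || true

ip -4 addr flush cvl_0_1 2>/dev/null || true              # initiator-side interface
ip netns exec cvl_0_0_ns_spdk ip -4 addr flush cvl_0_0 2>/dev/null || true
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true          # assumed namespace removal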
00:30:18.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.939 20:25:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.939 20:25:16 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.939 20:25:16 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.939 20:25:16 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.939 20:25:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.939 20:25:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.939 20:25:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.939 20:25:16 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:18.939 20:25:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:18.939 20:25:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:18.939 20:25:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:18.939 20:25:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:18.939 20:25:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:18.939 20:25:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.939 20:25:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:18.939 20:25:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.939 20:25:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:18.940 20:25:16 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:18.940 20:25:16 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:18.940 20:25:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:25.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:25.594 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:25.594 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:25.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.594 20:25:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.594 20:25:23 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:25.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:30:25.594 00:30:25.594 --- 10.0.0.2 ping statistics --- 00:30:25.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.594 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:30:25.594 20:25:23 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.450 ms 00:30:25.855 00:30:25.855 --- 10.0.0.1 ping statistics --- 00:30:25.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.855 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:30:25.855 20:25:23 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.855 20:25:23 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:25.855 20:25:23 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:25.855 20:25:23 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:29.153 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:29.153 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:29.153 20:25:26 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.153 20:25:26 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.153 20:25:26 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.153 20:25:26 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.153 20:25:26 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.153 20:25:26 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.415 20:25:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:29.415 20:25:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:29.415 20:25:26 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:29.415 20:25:26 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1193488 00:30:29.415 20:25:26 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1193488 00:30:29.415 20:25:26 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1193488 ']' 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.415 20:25:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:29.415 [2024-07-15 20:25:26.651662] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:30:29.415 [2024-07-15 20:25:26.651724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.415 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.415 [2024-07-15 20:25:26.721961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.415 [2024-07-15 20:25:26.795970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.415 [2024-07-15 20:25:26.796008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.415 [2024-07-15 20:25:26.796015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.415 [2024-07-15 20:25:26.796022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.415 [2024-07-15 20:25:26.796027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
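(Note on the setup traced above: nvmf_tcp_init splits the two ice/E810 ports found earlier across network namespaces so the TCP target and the fio initiator can reach each other over the physical NIC rather than a veth pair, with NET_TYPE=phy. A condensed sketch of the equivalent shell sequence, using only the interface names, addresses, and flags that appear in this trace — the relative ./build/bin path is shortened here for readability — would be roughly:
# target side: move cvl_0_0 into its own namespace and give it 10.0.0.2/24
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: cvl_0_1 stays in the default namespace with 10.0.0.1/24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity checks in both directions, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
The target therefore listens on 10.0.0.2:4420 inside cvl_0_0_ns_spdk, while fio runs from the default namespace through 10.0.0.1, which matches the ping output and the nvmf_tgt launch logged above and below.)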
00:30:29.415 [2024-07-15 20:25:26.796047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:30.360 20:25:27 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 20:25:27 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.360 20:25:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:30.360 20:25:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 [2024-07-15 20:25:27.467045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.360 20:25:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 ************************************ 00:30:30.360 START TEST fio_dif_1_default 00:30:30.360 ************************************ 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 bdev_null0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:30.360 [2024-07-15 20:25:27.551364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:30.360 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:30.361 { 00:30:30.361 "params": { 00:30:30.361 "name": "Nvme$subsystem", 00:30:30.361 "trtype": "$TEST_TRANSPORT", 00:30:30.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:30.361 "adrfam": "ipv4", 00:30:30.361 "trsvcid": "$NVMF_PORT", 00:30:30.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:30.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:30.361 "hdgst": ${hdgst:-false}, 00:30:30.361 "ddgst": ${ddgst:-false} 00:30:30.361 }, 00:30:30.361 "method": "bdev_nvme_attach_controller" 00:30:30.361 } 00:30:30.361 EOF 00:30:30.361 )") 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:30.361 "params": { 00:30:30.361 "name": "Nvme0", 00:30:30.361 "trtype": "tcp", 00:30:30.361 "traddr": "10.0.0.2", 00:30:30.361 "adrfam": "ipv4", 00:30:30.361 "trsvcid": "4420", 00:30:30.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:30.361 "hdgst": false, 00:30:30.361 "ddgst": false 00:30:30.361 }, 00:30:30.361 "method": "bdev_nvme_attach_controller" 00:30:30.361 }' 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:30.361 20:25:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:30.623 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:30.623 fio-3.35 00:30:30.623 Starting 1 thread 00:30:30.623 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.854 00:30:42.854 filename0: (groupid=0, jobs=1): err= 0: pid=1194074: Mon Jul 15 20:25:38 2024 00:30:42.854 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10020msec) 00:30:42.854 slat (nsec): min=5397, max=31704, avg=6102.71, stdev=1334.88 00:30:42.854 clat (usec): min=1206, max=43773, avg=21531.48, stdev=20085.14 00:30:42.854 lat (usec): min=1214, max=43805, avg=21537.58, stdev=20085.14 00:30:42.854 clat percentiles (usec): 00:30:42.854 | 1.00th=[ 1303], 5.00th=[ 1369], 10.00th=[ 1385], 20.00th=[ 1418], 00:30:42.854 | 30.00th=[ 1418], 40.00th=[ 1434], 50.00th=[41681], 60.00th=[41681], 00:30:42.854 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:42.854 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:30:42.854 | 99.99th=[43779] 00:30:42.854 bw ( KiB/s): min= 704, max= 768, per=99.93%, avg=742.40, stdev=30.45, samples=20 00:30:42.854 iops : min= 176, max= 
192, avg=185.60, stdev= 7.61, samples=20 00:30:42.854 lat (msec) : 2=49.89%, 50=50.11% 00:30:42.854 cpu : usr=95.57%, sys=4.24%, ctx=10, majf=0, minf=226 00:30:42.854 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.854 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.854 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:42.854 00:30:42.854 Run status group 0 (all jobs): 00:30:42.854 READ: bw=743KiB/s (760kB/s), 743KiB/s-743KiB/s (760kB/s-760kB/s), io=7440KiB (7619kB), run=10020-10020msec 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 00:30:42.854 real 0m11.216s 00:30:42.854 user 0m24.564s 00:30:42.854 sys 0m0.728s 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 ************************************ 00:30:42.854 END TEST fio_dif_1_default 00:30:42.854 ************************************ 00:30:42.854 20:25:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:42.854 20:25:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:42.854 20:25:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:42.854 20:25:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 ************************************ 00:30:42.854 START TEST fio_dif_1_multi_subsystems 00:30:42.854 ************************************ 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub 
in "$@" 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 bdev_null0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 [2024-07-15 20:25:38.846276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 bdev_null1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 20:25:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.854 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.855 { 00:30:42.855 "params": { 00:30:42.855 "name": "Nvme$subsystem", 00:30:42.855 "trtype": "$TEST_TRANSPORT", 00:30:42.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.855 "adrfam": "ipv4", 00:30:42.855 "trsvcid": "$NVMF_PORT", 00:30:42.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.855 "hdgst": ${hdgst:-false}, 00:30:42.855 "ddgst": ${ddgst:-false} 00:30:42.855 }, 00:30:42.855 "method": "bdev_nvme_attach_controller" 00:30:42.855 } 00:30:42.855 EOF 00:30:42.855 )") 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.855 { 00:30:42.855 "params": { 00:30:42.855 "name": "Nvme$subsystem", 00:30:42.855 "trtype": "$TEST_TRANSPORT", 00:30:42.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.855 "adrfam": "ipv4", 00:30:42.855 "trsvcid": "$NVMF_PORT", 00:30:42.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.855 "hdgst": ${hdgst:-false}, 00:30:42.855 "ddgst": ${ddgst:-false} 00:30:42.855 }, 00:30:42.855 "method": "bdev_nvme_attach_controller" 00:30:42.855 } 00:30:42.855 EOF 00:30:42.855 )") 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.855 "params": { 00:30:42.855 "name": "Nvme0", 00:30:42.855 "trtype": "tcp", 00:30:42.855 "traddr": "10.0.0.2", 00:30:42.855 "adrfam": "ipv4", 00:30:42.855 "trsvcid": "4420", 00:30:42.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.855 "hdgst": false, 00:30:42.855 "ddgst": false 00:30:42.855 }, 00:30:42.855 "method": "bdev_nvme_attach_controller" 00:30:42.855 },{ 00:30:42.855 "params": { 00:30:42.855 "name": "Nvme1", 00:30:42.855 "trtype": "tcp", 00:30:42.855 "traddr": "10.0.0.2", 00:30:42.855 "adrfam": "ipv4", 00:30:42.855 "trsvcid": "4420", 00:30:42.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.855 "hdgst": false, 00:30:42.855 "ddgst": false 00:30:42.855 }, 00:30:42.855 "method": "bdev_nvme_attach_controller" 00:30:42.855 }' 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:42.855 20:25:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.855 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:42.855 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:42.855 fio-3.35 00:30:42.855 Starting 2 threads 00:30:42.855 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.847 00:30:52.847 filename0: (groupid=0, jobs=1): err= 0: pid=1196360: Mon Jul 15 20:25:50 2024 00:30:52.847 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10026msec) 00:30:52.847 slat (nsec): min=5405, max=33493, avg=6180.93, stdev=1290.51 00:30:52.847 clat (usec): min=1228, max=43021, avg=21542.43, stdev=20093.75 00:30:52.847 lat (usec): min=1236, max=43055, avg=21548.61, stdev=20093.75 00:30:52.847 clat percentiles (usec): 00:30:52.847 | 1.00th=[ 1336], 5.00th=[ 1369], 10.00th=[ 1385], 20.00th=[ 1401], 00:30:52.847 | 30.00th=[ 1418], 40.00th=[ 1434], 50.00th=[41157], 60.00th=[41681], 00:30:52.847 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:52.847 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:52.847 | 99.99th=[43254] 
00:30:52.847 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=742.40, stdev=32.17, samples=20 00:30:52.847 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:30:52.847 lat (msec) : 2=49.89%, 50=50.11% 00:30:52.847 cpu : usr=97.16%, sys=2.63%, ctx=14, majf=0, minf=165 00:30:52.847 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.847 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.847 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:52.847 filename1: (groupid=0, jobs=1): err= 0: pid=1196361: Mon Jul 15 20:25:50 2024 00:30:52.847 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10003msec) 00:30:52.847 slat (nsec): min=5397, max=38861, avg=6431.93, stdev=1853.34 00:30:52.847 clat (usec): min=41739, max=43562, avg=42010.27, stdev=181.18 00:30:52.847 lat (usec): min=41747, max=43600, avg=42016.70, stdev=181.71 00:30:52.847 clat percentiles (usec): 00:30:52.847 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:52.847 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:52.847 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:52.847 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:30:52.847 | 99.99th=[43779] 00:30:52.847 bw ( KiB/s): min= 352, max= 384, per=33.87%, avg=380.80, stdev= 9.85, samples=20 00:30:52.847 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:52.847 lat (msec) : 50=100.00% 00:30:52.847 cpu : usr=97.33%, sys=2.46%, ctx=34, majf=0, minf=104 00:30:52.847 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.847 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.847 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:52.847 00:30:52.847 Run status group 0 (all jobs): 00:30:52.847 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-742KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10003-10026msec 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.847 20:25:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.847 00:30:52.847 real 0m11.439s 00:30:52.847 user 0m34.777s 00:30:52.847 sys 0m0.825s 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:52.847 20:25:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:52.847 ************************************ 00:30:52.847 END TEST fio_dif_1_multi_subsystems 00:30:52.847 ************************************ 00:30:53.109 20:25:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:53.109 20:25:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:53.109 20:25:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:53.109 20:25:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.109 20:25:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.109 ************************************ 00:30:53.109 START TEST fio_dif_rand_params 00:30:53.109 ************************************ 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
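(Note on the create_subsystem calls traced below: per the traces in this log, each fio_dif_* case builds a subsystem with four RPCs issued through rpc_cmd, the harness wrapper that talks to the target over /var/tmp/spdk.sock — create a null bdev with 16-byte per-block metadata and the requested DIF type, create the NQN, attach the bdev as a namespace, and add a TCP listener. For the fio_dif_rand_params case starting here (NULL_DIF=3), the sequence for subsystem 0 is roughly:
# per-subsystem RPC sequence, DIF type 3 variant used by fio_dif_rand_params
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
Teardown after each test is the mirror image, nvmf_delete_subsystem followed by bdev_null_delete, as seen at the end of the fio_dif_1_default and fio_dif_1_multi_subsystems runs above.)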
00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.109 bdev_null0 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.109 [2024-07-15 20:25:50.365772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.109 { 00:30:53.109 "params": { 00:30:53.109 "name": "Nvme$subsystem", 00:30:53.109 "trtype": "$TEST_TRANSPORT", 00:30:53.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.109 "adrfam": "ipv4", 00:30:53.109 "trsvcid": "$NVMF_PORT", 00:30:53.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.109 "hdgst": ${hdgst:-false}, 
00:30:53.109 "ddgst": ${ddgst:-false} 00:30:53.109 }, 00:30:53.109 "method": "bdev_nvme_attach_controller" 00:30:53.109 } 00:30:53.109 EOF 00:30:53.109 )") 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:53.109 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.110 "params": { 00:30:53.110 "name": "Nvme0", 00:30:53.110 "trtype": "tcp", 00:30:53.110 "traddr": "10.0.0.2", 00:30:53.110 "adrfam": "ipv4", 00:30:53.110 "trsvcid": "4420", 00:30:53.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.110 "hdgst": false, 00:30:53.110 "ddgst": false 00:30:53.110 }, 00:30:53.110 "method": "bdev_nvme_attach_controller" 00:30:53.110 }' 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.110 20:25:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.370 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:53.370 ... 
00:30:53.370 fio-3.35 00:30:53.370 Starting 3 threads 00:30:53.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.957 00:30:58.957 filename0: (groupid=0, jobs=1): err= 0: pid=1198634: Mon Jul 15 20:25:56 2024 00:30:58.957 read: IOPS=147, BW=18.4MiB/s (19.3MB/s)(93.0MiB/5050msec) 00:30:58.957 slat (nsec): min=5452, max=32357, avg=8387.24, stdev=2101.57 00:30:58.957 clat (usec): min=6809, max=95667, avg=20342.47, stdev=20137.66 00:30:58.957 lat (usec): min=6818, max=95676, avg=20350.86, stdev=20137.61 00:30:58.957 clat percentiles (usec): 00:30:58.957 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8586], 00:30:58.957 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[11469], 00:30:58.957 | 70.00th=[12780], 80.00th=[50070], 90.00th=[52167], 95.00th=[53740], 00:30:58.957 | 99.00th=[93848], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:30:58.957 | 99.99th=[95945] 00:30:58.957 bw ( KiB/s): min=13056, max=35840, per=36.03%, avg=18969.60, stdev=6692.49, samples=10 00:30:58.957 iops : min= 102, max= 280, avg=148.20, stdev=52.29, samples=10 00:30:58.957 lat (msec) : 10=40.59%, 20=37.10%, 50=2.82%, 100=19.49% 00:30:58.957 cpu : usr=96.26%, sys=3.47%, ctx=12, majf=0, minf=130 00:30:58.957 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.957 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.957 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.957 filename0: (groupid=0, jobs=1): err= 0: pid=1198635: Mon Jul 15 20:25:56 2024 00:30:58.957 read: IOPS=120, BW=15.0MiB/s (15.7MB/s)(75.8MiB/5046msec) 00:30:58.957 slat (nsec): min=5442, max=33388, avg=8168.05, stdev=1654.40 00:30:58.957 clat (usec): min=6571, max=94588, avg=24898.14, stdev=22525.97 00:30:58.957 lat (usec): min=6579, max=94597, avg=24906.31, stdev=22525.73 00:30:58.957 clat percentiles (usec): 00:30:58.957 | 1.00th=[ 6783], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 9241], 00:30:58.957 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11994], 60.00th=[13042], 00:30:58.957 | 70.00th=[49546], 80.00th=[51643], 90.00th=[52691], 95.00th=[54264], 00:30:58.957 | 99.00th=[92799], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:30:58.957 | 99.99th=[94897] 00:30:58.957 bw ( KiB/s): min= 8192, max=23040, per=29.32%, avg=15436.80, stdev=6149.98, samples=10 00:30:58.957 iops : min= 64, max= 180, avg=120.60, stdev=48.05, samples=10 00:30:58.957 lat (msec) : 10=30.53%, 20=37.95%, 50=3.80%, 100=27.72% 00:30:58.957 cpu : usr=96.77%, sys=2.89%, ctx=20, majf=0, minf=89 00:30:58.957 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.957 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.957 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.957 filename0: (groupid=0, jobs=1): err= 0: pid=1198636: Mon Jul 15 20:25:56 2024 00:30:58.957 read: IOPS=145, BW=18.2MiB/s (19.0MB/s)(90.9MiB/5003msec) 00:30:58.957 slat (nsec): min=5425, max=31696, avg=7757.08, stdev=1544.97 00:30:58.957 clat (msec): min=6, max=134, avg=20.63, stdev=19.89 00:30:58.957 lat (msec): min=6, max=134, avg=20.64, stdev=19.89 00:30:58.957 clat percentiles (msec): 00:30:58.957 | 1.00th=[ 
8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:30:58.957 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:58.957 | 70.00th=[ 14], 80.00th=[ 51], 90.00th=[ 53], 95.00th=[ 54], 00:30:58.957 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 136], 99.95th=[ 136], 00:30:58.957 | 99.99th=[ 136] 00:30:58.957 bw ( KiB/s): min=12544, max=26624, per=35.21%, avg=18537.30, stdev=4226.30, samples=10 00:30:58.957 iops : min= 98, max= 208, avg=144.80, stdev=33.04, samples=10 00:30:58.957 lat (msec) : 10=37.41%, 20=39.75%, 50=2.34%, 100=20.36%, 250=0.14% 00:30:58.957 cpu : usr=95.96%, sys=3.74%, ctx=8, majf=0, minf=68 00:30:58.957 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.957 issued rwts: total=727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.958 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.958 00:30:58.958 Run status group 0 (all jobs): 00:30:58.958 READ: bw=51.4MiB/s (53.9MB/s), 15.0MiB/s-18.4MiB/s (15.7MB/s-19.3MB/s), io=260MiB (272MB), run=5003-5050msec 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 bdev_null0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 [2024-07-15 20:25:56.485252] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 bdev_null1 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 bdev_null2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
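The teardown and re-creation traced just above is the same short RPC sequence repeated per index: the old subsystem and null bdev are removed (nvmf_delete_subsystem, bdev_null_delete), then a DIF-enabled null bdev is created, wrapped in an NVMe-oF subsystem, attached as a namespace, and exposed through a TCP listener on 10.0.0.2:4420. A condensed sketch of the create side for indices 0-2, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py (the RPC names and arguments are taken from the trace):

# Hedged sketch of the create_subsystems 0 1 2 sequence traced above.
for sub in 0 1 2; do
    # Null bdev sized as in the trace (64, 512) with 16-byte metadata and DIF type 2.
    rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem any host may connect to, with the null bdev as its namespace.
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    # TCP listener matching the traddr/trsvcid used by the fio-side JSON.
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done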
00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:59.220 { 00:30:59.220 "params": { 00:30:59.220 "name": "Nvme$subsystem", 00:30:59.220 "trtype": "$TEST_TRANSPORT", 00:30:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.220 "adrfam": "ipv4", 00:30:59.220 "trsvcid": "$NVMF_PORT", 00:30:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.220 "hdgst": ${hdgst:-false}, 00:30:59.220 "ddgst": ${ddgst:-false} 00:30:59.220 }, 00:30:59.220 "method": "bdev_nvme_attach_controller" 00:30:59.220 } 00:30:59.220 EOF 00:30:59.220 )") 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:59.220 { 00:30:59.220 "params": { 00:30:59.220 "name": "Nvme$subsystem", 00:30:59.220 "trtype": "$TEST_TRANSPORT", 00:30:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.220 "adrfam": "ipv4", 00:30:59.220 "trsvcid": "$NVMF_PORT", 00:30:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.220 "hdgst": ${hdgst:-false}, 00:30:59.220 "ddgst": ${ddgst:-false} 00:30:59.220 }, 00:30:59.220 "method": "bdev_nvme_attach_controller" 00:30:59.220 } 00:30:59.220 EOF 00:30:59.220 )") 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
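The config+=("$(cat <<-EOF ... EOF)") entries being appended above are how gen_nvmf_target_json builds one bdev_nvme_attach_controller block per requested subsystem; the blocks are then joined with IFS=, and pretty-printed by jq, producing the three-controller JSON that appears a few lines further on. A simplified, self-contained sketch of that accumulation pattern follows; the helper name and the outer "subsystems"/"bdev" envelope are assumptions, since only the per-controller block and the comma join are visible in the trace:

# Hedged sketch of the per-subsystem JSON accumulation seen in the trace.
# gen_target_json is a hypothetical stand-in for the real gen_nvmf_target_json.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-0}"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the accumulated blocks with commas (the IFS=, / printf step in the
    # trace) and wrap them in an assumed "subsystems"/"bdev" envelope for jq.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=,; printf '%s' "${config[*]}") ]
    }
  ]
}
JSON
}

gen_target_json 0 1 2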
00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:59.220 { 00:30:59.220 "params": { 00:30:59.220 "name": "Nvme$subsystem", 00:30:59.220 "trtype": "$TEST_TRANSPORT", 00:30:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.220 "adrfam": "ipv4", 00:30:59.220 "trsvcid": "$NVMF_PORT", 00:30:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.220 "hdgst": ${hdgst:-false}, 00:30:59.220 "ddgst": ${ddgst:-false} 00:30:59.220 }, 00:30:59.220 "method": "bdev_nvme_attach_controller" 00:30:59.220 } 00:30:59.220 EOF 00:30:59.220 )") 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:59.220 20:25:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:59.220 "params": { 00:30:59.220 "name": "Nvme0", 00:30:59.220 "trtype": "tcp", 00:30:59.220 "traddr": "10.0.0.2", 00:30:59.220 "adrfam": "ipv4", 00:30:59.220 "trsvcid": "4420", 00:30:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.220 "hdgst": false, 00:30:59.220 "ddgst": false 00:30:59.220 }, 00:30:59.220 "method": "bdev_nvme_attach_controller" 00:30:59.220 },{ 00:30:59.221 "params": { 00:30:59.221 "name": "Nvme1", 00:30:59.221 "trtype": "tcp", 00:30:59.221 "traddr": "10.0.0.2", 00:30:59.221 "adrfam": "ipv4", 00:30:59.221 "trsvcid": "4420", 00:30:59.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:59.221 "hdgst": false, 00:30:59.221 "ddgst": false 00:30:59.221 }, 00:30:59.221 "method": "bdev_nvme_attach_controller" 00:30:59.221 },{ 00:30:59.221 "params": { 00:30:59.221 "name": "Nvme2", 00:30:59.221 "trtype": "tcp", 00:30:59.221 "traddr": "10.0.0.2", 00:30:59.221 "adrfam": "ipv4", 00:30:59.221 "trsvcid": "4420", 00:30:59.221 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:59.221 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:59.221 "hdgst": false, 00:30:59.221 "ddgst": false 00:30:59.221 }, 00:30:59.221 "method": "bdev_nvme_attach_controller" 00:30:59.221 }' 00:30:59.221 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:59.221 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:59.221 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:59.221 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:59.221 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:59.221 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:59.509 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:30:59.509 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:59.509 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:59.509 20:25:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:59.773 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:59.773 ... 00:30:59.773 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:59.773 ... 00:30:59.773 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:59.773 ... 00:30:59.773 fio-3.35 00:30:59.773 Starting 24 threads 00:30:59.773 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.005 00:31:12.005 filename0: (groupid=0, jobs=1): err= 0: pid=1200062: Mon Jul 15 20:26:07 2024 00:31:12.005 read: IOPS=537, BW=2151KiB/s (2203kB/s)(21.1MiB/10026msec) 00:31:12.005 slat (nsec): min=5587, max=47802, avg=8915.65, stdev=4605.38 00:31:12.005 clat (usec): min=3217, max=37195, avg=29670.74, stdev=5576.77 00:31:12.005 lat (usec): min=3229, max=37204, avg=29679.65, stdev=5577.47 00:31:12.005 clat percentiles (usec): 00:31:12.005 | 1.00th=[11076], 5.00th=[18744], 10.00th=[20841], 20.00th=[23725], 00:31:12.005 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:12.005 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:31:12.005 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:31:12.005 | 99.99th=[36963] 00:31:12.005 bw ( KiB/s): min= 1916, max= 2432, per=4.59%, avg=2154.84, stdev=196.58, samples=19 00:31:12.005 iops : min= 479, max= 608, avg=538.63, stdev=49.09, samples=19 00:31:12.005 lat (msec) : 4=0.43%, 10=0.46%, 20=7.83%, 50=91.28% 00:31:12.005 cpu : usr=99.10%, sys=0.63%, ctx=6, majf=0, minf=23 00:31:12.005 IO depths : 1=6.2%, 2=12.4%, 4=24.7%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.005 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.005 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.005 filename0: (groupid=0, jobs=1): err= 0: pid=1200063: Mon Jul 15 20:26:07 2024 00:31:12.005 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10014msec) 00:31:12.005 slat (nsec): min=5683, max=86240, avg=18024.68, stdev=14621.91 00:31:12.005 clat (usec): min=21568, max=40299, avg=32784.67, stdev=1150.58 00:31:12.005 lat (usec): min=21576, max=40307, avg=32802.69, stdev=1150.10 00:31:12.005 clat percentiles (usec): 00:31:12.005 | 1.00th=[28181], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.005 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.005 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.005 | 99.00th=[35390], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:31:12.005 | 99.99th=[40109] 00:31:12.005 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1939.53, stdev=63.88, samples=19 00:31:12.005 iops : min= 448, max= 512, avg=484.84, stdev=15.90, samples=19 00:31:12.005 lat (msec) : 50=100.00% 00:31:12.005 cpu : usr=97.17%, sys=1.47%, 
ctx=41, majf=0, minf=34 00:31:12.005 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.005 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.005 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.005 filename0: (groupid=0, jobs=1): err= 0: pid=1200064: Mon Jul 15 20:26:07 2024 00:31:12.005 read: IOPS=484, BW=1937KiB/s (1983kB/s)(18.9MiB/10012msec) 00:31:12.005 slat (nsec): min=5731, max=83767, avg=22990.22, stdev=13313.14 00:31:12.005 clat (usec): min=22607, max=47966, avg=32838.56, stdev=1357.32 00:31:12.005 lat (usec): min=22613, max=47985, avg=32861.55, stdev=1356.91 00:31:12.005 clat percentiles (usec): 00:31:12.005 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.005 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.005 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.005 | 99.00th=[35390], 99.50th=[39584], 99.90th=[47973], 99.95th=[47973], 00:31:12.005 | 99.99th=[47973] 00:31:12.005 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1933.00, stdev=71.02, samples=19 00:31:12.005 iops : min= 448, max= 512, avg=483.21, stdev=17.84, samples=19 00:31:12.005 lat (msec) : 50=100.00% 00:31:12.005 cpu : usr=96.42%, sys=1.99%, ctx=139, majf=0, minf=36 00:31:12.005 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:12.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.005 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.005 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.005 filename0: (groupid=0, jobs=1): err= 0: pid=1200065: Mon Jul 15 20:26:07 2024 00:31:12.005 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.5MiB/10005msec) 00:31:12.006 slat (nsec): min=5499, max=97717, avg=13474.77, stdev=11311.49 00:31:12.006 clat (usec): min=5829, max=65443, avg=33675.80, stdev=5835.55 00:31:12.006 lat (usec): min=5836, max=65456, avg=33689.27, stdev=5835.35 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[15533], 5.00th=[24773], 10.00th=[30540], 20.00th=[32113], 00:31:12.006 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:31:12.006 | 70.00th=[33817], 80.00th=[34341], 90.00th=[41157], 95.00th=[45351], 00:31:12.006 | 99.00th=[53740], 99.50th=[59507], 99.90th=[65274], 99.95th=[65274], 00:31:12.006 | 99.99th=[65274] 00:31:12.006 bw ( KiB/s): min= 1616, max= 2048, per=4.02%, avg=1886.53, stdev=112.05, samples=19 00:31:12.006 iops : min= 404, max= 512, avg=471.63, stdev=28.01, samples=19 00:31:12.006 lat (msec) : 10=0.13%, 20=2.23%, 50=95.76%, 100=1.88% 00:31:12.006 cpu : usr=98.86%, sys=0.81%, ctx=22, majf=0, minf=36 00:31:12.006 IO depths : 1=0.6%, 2=1.3%, 4=7.4%, 8=76.3%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=90.3%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=4743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename0: (groupid=0, jobs=1): err= 0: pid=1200066: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=484, BW=1939KiB/s 
(1985kB/s)(18.9MiB/10003msec) 00:31:12.006 slat (nsec): min=5489, max=67771, avg=19093.71, stdev=11643.25 00:31:12.006 clat (usec): min=10939, max=58406, avg=32842.53, stdev=2144.92 00:31:12.006 lat (usec): min=10944, max=58423, avg=32861.62, stdev=2145.09 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.006 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.006 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.006 | 99.00th=[36963], 99.50th=[37487], 99.90th=[58459], 99.95th=[58459], 00:31:12.006 | 99.99th=[58459] 00:31:12.006 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1932.79, stdev=58.31, samples=19 00:31:12.006 iops : min= 448, max= 512, avg=483.16, stdev=14.50, samples=19 00:31:12.006 lat (msec) : 20=0.37%, 50=99.30%, 100=0.33% 00:31:12.006 cpu : usr=99.09%, sys=0.58%, ctx=105, majf=0, minf=35 00:31:12.006 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename0: (groupid=0, jobs=1): err= 0: pid=1200067: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=599, BW=2398KiB/s (2456kB/s)(23.4MiB/10007msec) 00:31:12.006 slat (nsec): min=2979, max=39504, avg=7112.52, stdev=2948.37 00:31:12.006 clat (usec): min=831, max=37397, avg=26622.48, stdev=6848.16 00:31:12.006 lat (usec): min=836, max=37405, avg=26629.59, stdev=6848.64 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[ 2114], 5.00th=[17957], 10.00th=[19006], 20.00th=[21103], 00:31:12.006 | 30.00th=[22938], 40.00th=[23987], 50.00th=[25822], 60.00th=[32113], 00:31:12.006 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:12.006 | 99.00th=[34341], 99.50th=[34341], 99.90th=[37487], 99.95th=[37487], 00:31:12.006 | 99.99th=[37487] 00:31:12.006 bw ( KiB/s): min= 1920, max= 3456, per=5.15%, avg=2417.42, stdev=348.56, samples=19 00:31:12.006 iops : min= 480, max= 864, avg=604.21, stdev=87.11, samples=19 00:31:12.006 lat (usec) : 1000=0.03% 00:31:12.006 lat (msec) : 2=0.33%, 4=2.27%, 10=0.30%, 20=12.62%, 50=84.45% 00:31:12.006 cpu : usr=98.77%, sys=0.92%, ctx=21, majf=0, minf=57 00:31:12.006 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename0: (groupid=0, jobs=1): err= 0: pid=1200068: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=392, BW=1570KiB/s (1608kB/s)(15.3MiB/10004msec) 00:31:12.006 slat (nsec): min=5441, max=91574, avg=17444.13, stdev=13936.48 00:31:12.006 clat (usec): min=5031, max=78859, avg=40637.89, stdev=6869.22 00:31:12.006 lat (usec): min=5046, max=78882, avg=40655.33, stdev=6866.74 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[18220], 5.00th=[32375], 10.00th=[32637], 20.00th=[33424], 00:31:12.006 | 30.00th=[39060], 40.00th=[41681], 50.00th=[42206], 60.00th=[43254], 00:31:12.006 | 70.00th=[44303], 80.00th=[45351], 90.00th=[46924], 
95.00th=[47973], 00:31:12.006 | 99.00th=[50594], 99.50th=[53740], 99.90th=[79168], 99.95th=[79168], 00:31:12.006 | 99.99th=[79168] 00:31:12.006 bw ( KiB/s): min= 1408, max= 1920, per=3.31%, avg=1557.84, stdev=174.99, samples=19 00:31:12.006 iops : min= 352, max= 480, avg=389.42, stdev=43.72, samples=19 00:31:12.006 lat (msec) : 10=0.71%, 20=0.36%, 50=97.84%, 100=1.09% 00:31:12.006 cpu : usr=98.90%, sys=0.80%, ctx=21, majf=0, minf=49 00:31:12.006 IO depths : 1=0.7%, 2=1.5%, 4=17.1%, 8=68.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=3927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename0: (groupid=0, jobs=1): err= 0: pid=1200069: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=484, BW=1939KiB/s (1986kB/s)(19.0MiB/10017msec) 00:31:12.006 slat (nsec): min=5578, max=75913, avg=14107.40, stdev=10191.88 00:31:12.006 clat (usec): min=16491, max=53045, avg=32893.17, stdev=3971.52 00:31:12.006 lat (usec): min=16513, max=53052, avg=32907.28, stdev=3971.41 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[19530], 5.00th=[25560], 10.00th=[31327], 20.00th=[32113], 00:31:12.006 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:31:12.006 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[40109], 00:31:12.006 | 99.00th=[47449], 99.50th=[48497], 99.90th=[52691], 99.95th=[53216], 00:31:12.006 | 99.99th=[53216] 00:31:12.006 bw ( KiB/s): min= 1836, max= 2096, per=4.12%, avg=1934.53, stdev=64.45, samples=19 00:31:12.006 iops : min= 459, max= 524, avg=483.63, stdev=16.11, samples=19 00:31:12.006 lat (msec) : 20=1.52%, 50=98.15%, 100=0.33% 00:31:12.006 cpu : usr=98.85%, sys=0.82%, ctx=16, majf=0, minf=38 00:31:12.006 IO depths : 1=2.8%, 2=6.2%, 4=17.5%, 8=63.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=4856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename1: (groupid=0, jobs=1): err= 0: pid=1200070: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:31:12.006 slat (nsec): min=5580, max=49505, avg=10685.54, stdev=6354.56 00:31:12.006 clat (usec): min=21612, max=38208, avg=32813.39, stdev=1288.73 00:31:12.006 lat (usec): min=21644, max=38216, avg=32824.07, stdev=1288.19 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[28181], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:31:12.006 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:31:12.006 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.006 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:31:12.006 | 99.99th=[38011] 00:31:12.006 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1939.32, stdev=47.64, samples=19 00:31:12.006 iops : min= 479, max= 512, avg=484.79, stdev=11.82, samples=19 00:31:12.006 lat (msec) : 50=100.00% 00:31:12.006 cpu : usr=99.19%, sys=0.54%, ctx=9, majf=0, minf=49 00:31:12.006 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename1: (groupid=0, jobs=1): err= 0: pid=1200071: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:31:12.006 slat (nsec): min=5714, max=82633, avg=21378.04, stdev=13616.24 00:31:12.006 clat (usec): min=24156, max=56592, avg=32840.62, stdev=1199.69 00:31:12.006 lat (usec): min=24162, max=56613, avg=32862.00, stdev=1198.98 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.006 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.006 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.006 | 99.00th=[35390], 99.50th=[38011], 99.90th=[44303], 99.95th=[44303], 00:31:12.006 | 99.99th=[56361] 00:31:12.006 bw ( KiB/s): min= 1916, max= 2048, per=4.11%, avg=1932.84, stdev=40.61, samples=19 00:31:12.006 iops : min= 479, max= 512, avg=483.21, stdev=10.15, samples=19 00:31:12.006 lat (msec) : 50=99.96%, 100=0.04% 00:31:12.006 cpu : usr=98.89%, sys=0.69%, ctx=15, majf=0, minf=46 00:31:12.006 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename1: (groupid=0, jobs=1): err= 0: pid=1200072: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:31:12.006 slat (nsec): min=5030, max=87850, avg=25099.12, stdev=16179.67 00:31:12.006 clat (usec): min=23583, max=55275, avg=32776.28, stdev=1184.22 00:31:12.006 lat (usec): min=23591, max=55292, avg=32801.38, stdev=1184.70 00:31:12.006 clat percentiles (usec): 00:31:12.006 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[32113], 00:31:12.006 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:12.006 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.006 | 99.00th=[35390], 99.50th=[38011], 99.90th=[44303], 99.95th=[44303], 00:31:12.006 | 99.99th=[55313] 00:31:12.006 bw ( KiB/s): min= 1916, max= 2048, per=4.11%, avg=1932.84, stdev=40.61, samples=19 00:31:12.006 iops : min= 479, max= 512, avg=483.21, stdev=10.15, samples=19 00:31:12.006 lat (msec) : 50=99.96%, 100=0.04% 00:31:12.006 cpu : usr=98.34%, sys=0.91%, ctx=22, majf=0, minf=41 00:31:12.006 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.006 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.006 filename1: (groupid=0, jobs=1): err= 0: pid=1200073: Mon Jul 15 20:26:07 2024 00:31:12.006 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10004msec) 00:31:12.007 slat (nsec): min=5467, max=75811, avg=19443.49, stdev=12999.01 00:31:12.007 clat (usec): min=10494, max=57501, avg=32833.40, stdev=2274.65 00:31:12.007 lat (usec): min=10500, 
max=57516, avg=32852.84, stdev=2275.09 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.007 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.007 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.007 | 99.00th=[36963], 99.50th=[45351], 99.90th=[57410], 99.95th=[57410], 00:31:12.007 | 99.99th=[57410] 00:31:12.007 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1932.95, stdev=57.91, samples=19 00:31:12.007 iops : min= 448, max= 512, avg=483.16, stdev=14.50, samples=19 00:31:12.007 lat (msec) : 20=0.52%, 50=99.15%, 100=0.33% 00:31:12.007 cpu : usr=96.49%, sys=1.83%, ctx=111, majf=0, minf=33 00:31:12.007 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename1: (groupid=0, jobs=1): err= 0: pid=1200074: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=486, BW=1948KiB/s (1994kB/s)(19.0MiB/10004msec) 00:31:12.007 slat (nsec): min=5583, max=58285, avg=12293.27, stdev=8179.00 00:31:12.007 clat (usec): min=3139, max=68253, avg=32760.61, stdev=3904.14 00:31:12.007 lat (usec): min=3151, max=68275, avg=32772.91, stdev=3903.95 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[20055], 5.00th=[28967], 10.00th=[31589], 20.00th=[32375], 00:31:12.007 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:12.007 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:12.007 | 99.00th=[46400], 99.50th=[50070], 99.90th=[68682], 99.95th=[68682], 00:31:12.007 | 99.99th=[68682] 00:31:12.007 bw ( KiB/s): min= 1795, max= 2064, per=4.12%, avg=1934.63, stdev=55.22, samples=19 00:31:12.007 iops : min= 448, max= 516, avg=483.58, stdev=13.83, samples=19 00:31:12.007 lat (msec) : 4=0.02%, 10=0.29%, 20=0.60%, 50=98.56%, 100=0.53% 00:31:12.007 cpu : usr=99.18%, sys=0.53%, ctx=13, majf=0, minf=47 00:31:12.007 IO depths : 1=4.0%, 2=8.5%, 4=18.7%, 8=59.0%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=92.8%, 8=2.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename1: (groupid=0, jobs=1): err= 0: pid=1200075: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:31:12.007 slat (nsec): min=5644, max=88313, avg=24739.48, stdev=16060.47 00:31:12.007 clat (usec): min=19498, max=49678, avg=32811.69, stdev=2313.84 00:31:12.007 lat (usec): min=19507, max=49686, avg=32836.42, stdev=2314.41 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[23987], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:31:12.007 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:12.007 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:12.007 | 99.00th=[41681], 99.50th=[43779], 99.90th=[47973], 99.95th=[49546], 00:31:12.007 | 99.99th=[49546] 00:31:12.007 bw ( KiB/s): min= 1904, max= 2048, per=4.11%, avg=1932.84, stdev=40.87, samples=19 00:31:12.007 iops : min= 
476, max= 512, avg=483.21, stdev=10.22, samples=19 00:31:12.007 lat (msec) : 20=0.04%, 50=99.96% 00:31:12.007 cpu : usr=98.85%, sys=0.80%, ctx=57, majf=0, minf=49 00:31:12.007 IO depths : 1=4.8%, 2=10.2%, 4=22.8%, 8=54.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename1: (groupid=0, jobs=1): err= 0: pid=1200077: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10004msec) 00:31:12.007 slat (nsec): min=5469, max=48242, avg=11264.52, stdev=7013.94 00:31:12.007 clat (usec): min=4324, max=68788, avg=32844.39, stdev=3057.84 00:31:12.007 lat (usec): min=4331, max=68806, avg=32855.65, stdev=3057.95 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:31:12.007 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:12.007 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.007 | 99.00th=[36963], 99.50th=[36963], 99.90th=[68682], 99.95th=[68682], 00:31:12.007 | 99.99th=[68682] 00:31:12.007 bw ( KiB/s): min= 1795, max= 2048, per=4.10%, avg=1926.21, stdev=50.79, samples=19 00:31:12.007 iops : min= 448, max= 512, avg=481.47, stdev=12.71, samples=19 00:31:12.007 lat (msec) : 10=0.25%, 20=0.62%, 50=98.81%, 100=0.33% 00:31:12.007 cpu : usr=97.13%, sys=1.46%, ctx=84, majf=0, minf=44 00:31:12.007 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename1: (groupid=0, jobs=1): err= 0: pid=1200078: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10014msec) 00:31:12.007 slat (nsec): min=5647, max=87357, avg=10794.96, stdev=8521.95 00:31:12.007 clat (usec): min=25371, max=49570, avg=32953.95, stdev=1304.36 00:31:12.007 lat (usec): min=25378, max=49600, avg=32964.75, stdev=1305.23 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:31:12.007 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:12.007 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.007 | 99.00th=[35390], 99.50th=[38011], 99.90th=[49546], 99.95th=[49546], 00:31:12.007 | 99.99th=[49546] 00:31:12.007 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1932.79, stdev=72.25, samples=19 00:31:12.007 iops : min= 448, max= 512, avg=483.16, stdev=18.00, samples=19 00:31:12.007 lat (msec) : 50=100.00% 00:31:12.007 cpu : usr=98.95%, sys=0.75%, ctx=12, majf=0, minf=37 00:31:12.007 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename2: 
(groupid=0, jobs=1): err= 0: pid=1200079: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=543, BW=2175KiB/s (2227kB/s)(21.3MiB/10019msec) 00:31:12.007 slat (nsec): min=5572, max=94000, avg=9657.79, stdev=7486.47 00:31:12.007 clat (usec): min=14463, max=54317, avg=29343.23, stdev=6072.08 00:31:12.007 lat (usec): min=14469, max=54341, avg=29352.89, stdev=6073.96 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[15664], 5.00th=[18744], 10.00th=[20317], 20.00th=[23200], 00:31:12.007 | 30.00th=[24773], 40.00th=[31327], 50.00th=[32113], 60.00th=[32637], 00:31:12.007 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:31:12.007 | 99.00th=[46400], 99.50th=[46924], 99.90th=[54264], 99.95th=[54264], 00:31:12.007 | 99.99th=[54264] 00:31:12.007 bw ( KiB/s): min= 1916, max= 2864, per=4.64%, avg=2177.75, stdev=297.68, samples=20 00:31:12.007 iops : min= 479, max= 716, avg=544.40, stdev=74.42, samples=20 00:31:12.007 lat (msec) : 20=9.54%, 50=90.35%, 100=0.11% 00:31:12.007 cpu : usr=97.32%, sys=1.39%, ctx=49, majf=0, minf=95 00:31:12.007 IO depths : 1=2.4%, 2=5.2%, 4=14.9%, 8=66.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=5448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename2: (groupid=0, jobs=1): err= 0: pid=1200080: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:31:12.007 slat (nsec): min=5605, max=46093, avg=9709.09, stdev=5699.33 00:31:12.007 clat (usec): min=18585, max=49290, avg=32822.07, stdev=1755.48 00:31:12.007 lat (usec): min=18591, max=49313, avg=32831.78, stdev=1755.69 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[22676], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:31:12.007 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:12.007 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.007 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46924], 99.95th=[46924], 00:31:12.007 | 99.99th=[49546] 00:31:12.007 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1939.53, stdev=47.54, samples=19 00:31:12.007 iops : min= 479, max= 512, avg=484.84, stdev=11.80, samples=19 00:31:12.007 lat (msec) : 20=0.37%, 50=99.63% 00:31:12.007 cpu : usr=99.02%, sys=0.63%, ctx=114, majf=0, minf=46 00:31:12.007 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename2: (groupid=0, jobs=1): err= 0: pid=1200081: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=484, BW=1939KiB/s (1985kB/s)(18.9MiB/10003msec) 00:31:12.007 slat (nsec): min=5684, max=68707, avg=17362.14, stdev=11235.41 00:31:12.007 clat (usec): min=10821, max=72427, avg=32856.59, stdev=2531.84 00:31:12.007 lat (usec): min=10834, max=72444, avg=32873.95, stdev=2531.95 00:31:12.007 clat percentiles (usec): 00:31:12.007 | 1.00th=[28443], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.007 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:12.007 | 
70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.007 | 99.00th=[37487], 99.50th=[45876], 99.90th=[58459], 99.95th=[58459], 00:31:12.007 | 99.99th=[72877] 00:31:12.007 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1932.79, stdev=58.31, samples=19 00:31:12.007 iops : min= 448, max= 512, avg=483.16, stdev=14.50, samples=19 00:31:12.007 lat (msec) : 20=0.70%, 50=98.97%, 100=0.33% 00:31:12.007 cpu : usr=99.25%, sys=0.46%, ctx=60, majf=0, minf=39 00:31:12.007 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.007 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.007 filename2: (groupid=0, jobs=1): err= 0: pid=1200082: Mon Jul 15 20:26:07 2024 00:31:12.007 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10010msec) 00:31:12.007 slat (nsec): min=5607, max=88266, avg=20221.93, stdev=15568.30 00:31:12.008 clat (usec): min=25085, max=49607, avg=32870.28, stdev=1191.96 00:31:12.008 lat (usec): min=25094, max=49629, avg=32890.50, stdev=1191.47 00:31:12.008 clat percentiles (usec): 00:31:12.008 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.008 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.008 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.008 | 99.00th=[35390], 99.50th=[38011], 99.90th=[45876], 99.95th=[45876], 00:31:12.008 | 99.99th=[49546] 00:31:12.008 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1933.00, stdev=58.51, samples=19 00:31:12.008 iops : min= 448, max= 512, avg=483.21, stdev=14.73, samples=19 00:31:12.008 lat (msec) : 50=100.00% 00:31:12.008 cpu : usr=96.99%, sys=1.54%, ctx=88, majf=0, minf=43 00:31:12.008 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.008 filename2: (groupid=0, jobs=1): err= 0: pid=1200083: Mon Jul 15 20:26:07 2024 00:31:12.008 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10010msec) 00:31:12.008 slat (nsec): min=4938, max=94902, avg=25789.16, stdev=15793.52 00:31:12.008 clat (usec): min=21852, max=46040, avg=32808.29, stdev=1223.75 00:31:12.008 lat (usec): min=21858, max=46055, avg=32834.08, stdev=1223.41 00:31:12.008 clat percentiles (usec): 00:31:12.008 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.008 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:12.008 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.008 | 99.00th=[35390], 99.50th=[38011], 99.90th=[45876], 99.95th=[45876], 00:31:12.008 | 99.99th=[45876] 00:31:12.008 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1932.84, stdev=58.90, samples=19 00:31:12.008 iops : min= 448, max= 512, avg=483.21, stdev=14.73, samples=19 00:31:12.008 lat (msec) : 50=100.00% 00:31:12.008 cpu : usr=99.19%, sys=0.50%, ctx=48, majf=0, minf=33 00:31:12.008 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:12.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.008 filename2: (groupid=0, jobs=1): err= 0: pid=1200084: Mon Jul 15 20:26:07 2024 00:31:12.008 read: IOPS=484, BW=1939KiB/s (1986kB/s)(19.0MiB/10013msec) 00:31:12.008 slat (nsec): min=5574, max=88490, avg=21062.46, stdev=14626.34 00:31:12.008 clat (usec): min=19511, max=53230, avg=32813.70, stdev=1741.91 00:31:12.008 lat (usec): min=19518, max=53254, avg=32834.77, stdev=1741.58 00:31:12.008 clat percentiles (usec): 00:31:12.008 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.008 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.008 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:12.008 | 99.00th=[37487], 99.50th=[39060], 99.90th=[53216], 99.95th=[53216], 00:31:12.008 | 99.99th=[53216] 00:31:12.008 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1935.37, stdev=71.92, samples=19 00:31:12.008 iops : min= 448, max= 512, avg=483.84, stdev=17.98, samples=19 00:31:12.008 lat (msec) : 20=0.12%, 50=99.55%, 100=0.33% 00:31:12.008 cpu : usr=98.86%, sys=0.78%, ctx=77, majf=0, minf=36 00:31:12.008 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:12.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.008 filename2: (groupid=0, jobs=1): err= 0: pid=1200085: Mon Jul 15 20:26:07 2024 00:31:12.008 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10005msec) 00:31:12.008 slat (nsec): min=5574, max=72855, avg=10989.95, stdev=8797.85 00:31:12.008 clat (usec): min=17417, max=46284, avg=32929.93, stdev=1407.10 00:31:12.008 lat (usec): min=17426, max=46291, avg=32940.92, stdev=1406.68 00:31:12.008 clat percentiles (usec): 00:31:12.008 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:31:12.008 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:31:12.008 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.008 | 99.00th=[37487], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:31:12.008 | 99.99th=[46400] 00:31:12.008 bw ( KiB/s): min= 1900, max= 2048, per=4.11%, avg=1932.84, stdev=38.54, samples=19 00:31:12.008 iops : min= 475, max= 512, avg=483.21, stdev= 9.64, samples=19 00:31:12.008 lat (msec) : 20=0.21%, 50=99.79% 00:31:12.008 cpu : usr=98.89%, sys=0.77%, ctx=72, majf=0, minf=51 00:31:12.008 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:12.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.008 filename2: (groupid=0, jobs=1): err= 0: pid=1200086: Mon Jul 15 20:26:07 2024 00:31:12.008 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10005msec) 00:31:12.008 slat (usec): min=5, max=280, avg=18.12, stdev=12.65 00:31:12.008 clat (usec): min=5364, max=56024, avg=32773.63, stdev=2712.28 00:31:12.008 lat 
(usec): min=5370, max=56040, avg=32791.75, stdev=2713.70 00:31:12.008 clat percentiles (usec): 00:31:12.008 | 1.00th=[27395], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:31:12.008 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:31:12.008 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:12.008 | 99.00th=[37487], 99.50th=[46924], 99.90th=[55837], 99.95th=[55837], 00:31:12.008 | 99.99th=[55837] 00:31:12.008 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1932.84, stdev=59.20, samples=19 00:31:12.008 iops : min= 448, max= 512, avg=483.21, stdev=14.80, samples=19 00:31:12.008 lat (msec) : 10=0.21%, 20=0.70%, 50=98.76%, 100=0.33% 00:31:12.008 cpu : usr=97.39%, sys=1.52%, ctx=80, majf=0, minf=43 00:31:12.008 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:12.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.008 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:12.008 00:31:12.008 Run status group 0 (all jobs): 00:31:12.008 READ: bw=45.9MiB/s (48.1MB/s), 1570KiB/s-2398KiB/s (1608kB/s-2456kB/s), io=460MiB (482MB), run=10003-10026msec 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.008 bdev_null0 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.008 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 [2024-07-15 20:26:08.133757] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 bdev_null1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:12.009 { 00:31:12.009 "params": { 00:31:12.009 "name": "Nvme$subsystem", 00:31:12.009 "trtype": "$TEST_TRANSPORT", 00:31:12.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.009 "adrfam": "ipv4", 00:31:12.009 "trsvcid": "$NVMF_PORT", 00:31:12.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.009 "hdgst": ${hdgst:-false}, 00:31:12.009 "ddgst": ${ddgst:-false} 00:31:12.009 }, 00:31:12.009 "method": "bdev_nvme_attach_controller" 00:31:12.009 } 00:31:12.009 EOF 00:31:12.009 )") 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:12.009 { 00:31:12.009 "params": { 00:31:12.009 "name": "Nvme$subsystem", 00:31:12.009 "trtype": "$TEST_TRANSPORT", 00:31:12.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.009 "adrfam": "ipv4", 00:31:12.009 "trsvcid": "$NVMF_PORT", 00:31:12.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.009 "hdgst": ${hdgst:-false}, 00:31:12.009 "ddgst": ${ddgst:-false} 00:31:12.009 }, 00:31:12.009 "method": 
"bdev_nvme_attach_controller" 00:31:12.009 } 00:31:12.009 EOF 00:31:12.009 )") 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:12.009 "params": { 00:31:12.009 "name": "Nvme0", 00:31:12.009 "trtype": "tcp", 00:31:12.009 "traddr": "10.0.0.2", 00:31:12.009 "adrfam": "ipv4", 00:31:12.009 "trsvcid": "4420", 00:31:12.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.009 "hdgst": false, 00:31:12.009 "ddgst": false 00:31:12.009 }, 00:31:12.009 "method": "bdev_nvme_attach_controller" 00:31:12.009 },{ 00:31:12.009 "params": { 00:31:12.009 "name": "Nvme1", 00:31:12.009 "trtype": "tcp", 00:31:12.009 "traddr": "10.0.0.2", 00:31:12.009 "adrfam": "ipv4", 00:31:12.009 "trsvcid": "4420", 00:31:12.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:12.009 "hdgst": false, 00:31:12.009 "ddgst": false 00:31:12.009 }, 00:31:12.009 "method": "bdev_nvme_attach_controller" 00:31:12.009 }' 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:12.009 20:26:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.009 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:12.009 ... 00:31:12.009 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:12.009 ... 
00:31:12.009 fio-3.35 00:31:12.009 Starting 4 threads 00:31:12.009 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.298 00:31:17.298 filename0: (groupid=0, jobs=1): err= 0: pid=1202530: Mon Jul 15 20:26:14 2024 00:31:17.298 read: IOPS=2069, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5003msec) 00:31:17.298 slat (nsec): min=5391, max=35153, avg=7911.33, stdev=2189.10 00:31:17.298 clat (usec): min=1833, max=45515, avg=3842.47, stdev=1324.86 00:31:17.298 lat (usec): min=1839, max=45550, avg=3850.38, stdev=1325.07 00:31:17.298 clat percentiles (usec): 00:31:17.298 | 1.00th=[ 2376], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3261], 00:31:17.298 | 30.00th=[ 3458], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3916], 00:31:17.298 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 4948], 00:31:17.298 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6587], 99.95th=[45351], 00:31:17.298 | 99.99th=[45351] 00:31:17.298 bw ( KiB/s): min=15296, max=17008, per=25.04%, avg=16526.22, stdev=501.67, samples=9 00:31:17.298 iops : min= 1912, max= 2126, avg=2065.78, stdev=62.71, samples=9 00:31:17.298 lat (msec) : 2=0.10%, 4=63.88%, 10=35.95%, 50=0.08% 00:31:17.298 cpu : usr=96.78%, sys=2.98%, ctx=4, majf=0, minf=0 00:31:17.298 IO depths : 1=0.3%, 2=1.8%, 4=69.3%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 issued rwts: total=10356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.298 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:17.298 filename0: (groupid=0, jobs=1): err= 0: pid=1202531: Mon Jul 15 20:26:14 2024 00:31:17.298 read: IOPS=2071, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5002msec) 00:31:17.298 slat (nsec): min=5386, max=27288, avg=6144.38, stdev=2131.60 00:31:17.298 clat (usec): min=1616, max=6546, avg=3845.61, stdev=665.30 00:31:17.298 lat (usec): min=1625, max=6552, avg=3851.75, stdev=665.15 00:31:17.298 clat percentiles (usec): 00:31:17.298 | 1.00th=[ 2311], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3294], 00:31:17.298 | 30.00th=[ 3490], 40.00th=[ 3687], 50.00th=[ 3818], 60.00th=[ 3982], 00:31:17.298 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5014], 00:31:17.298 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6194], 99.95th=[ 6325], 00:31:17.298 | 99.99th=[ 6521] 00:31:17.298 bw ( KiB/s): min=16336, max=17296, per=25.16%, avg=16606.22, stdev=275.79, samples=9 00:31:17.298 iops : min= 2042, max= 2162, avg=2075.78, stdev=34.47, samples=9 00:31:17.298 lat (msec) : 2=0.23%, 4=61.08%, 10=38.68% 00:31:17.298 cpu : usr=97.18%, sys=2.60%, ctx=8, majf=0, minf=9 00:31:17.298 IO depths : 1=0.3%, 2=1.4%, 4=69.6%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 issued rwts: total=10361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.298 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:17.298 filename1: (groupid=0, jobs=1): err= 0: pid=1202532: Mon Jul 15 20:26:14 2024 00:31:17.298 read: IOPS=2063, BW=16.1MiB/s (16.9MB/s)(80.7MiB/5003msec) 00:31:17.298 slat (nsec): min=5386, max=35513, avg=6070.11, stdev=1877.22 00:31:17.298 clat (usec): min=1897, max=44330, avg=3859.95, stdev=1292.53 00:31:17.298 lat (usec): min=1902, max=44366, avg=3866.02, stdev=1292.73 00:31:17.298 clat percentiles (usec): 00:31:17.298 | 1.00th=[ 2507], 
5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3294], 00:31:17.298 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3818], 60.00th=[ 3916], 00:31:17.298 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 4948], 00:31:17.298 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6783], 99.95th=[44303], 00:31:17.298 | 99.99th=[44303] 00:31:17.298 bw ( KiB/s): min=15376, max=16880, per=25.01%, avg=16508.44, stdev=450.41, samples=9 00:31:17.298 iops : min= 1922, max= 2110, avg=2063.56, stdev=56.30, samples=9 00:31:17.298 lat (msec) : 2=0.07%, 4=62.68%, 10=37.18%, 50=0.08% 00:31:17.298 cpu : usr=97.36%, sys=2.38%, ctx=10, majf=0, minf=9 00:31:17.298 IO depths : 1=0.3%, 2=1.6%, 4=69.7%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 issued rwts: total=10324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.298 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:17.298 filename1: (groupid=0, jobs=1): err= 0: pid=1202533: Mon Jul 15 20:26:14 2024 00:31:17.298 read: IOPS=2046, BW=16.0MiB/s (16.8MB/s)(80.0MiB/5003msec) 00:31:17.298 slat (nsec): min=5389, max=28068, avg=6161.17, stdev=2175.82 00:31:17.298 clat (usec): min=1617, max=6634, avg=3891.36, stdev=673.84 00:31:17.298 lat (usec): min=1622, max=6640, avg=3897.52, stdev=673.67 00:31:17.298 clat percentiles (usec): 00:31:17.298 | 1.00th=[ 2442], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3326], 00:31:17.298 | 30.00th=[ 3523], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 4047], 00:31:17.298 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5014], 00:31:17.298 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6390], 00:31:17.298 | 99.99th=[ 6652] 00:31:17.298 bw ( KiB/s): min=16256, max=16576, per=24.84%, avg=16398.22, stdev=108.65, samples=9 00:31:17.298 iops : min= 2032, max= 2072, avg=2049.78, stdev=13.58, samples=9 00:31:17.298 lat (msec) : 2=0.19%, 4=58.13%, 10=41.68% 00:31:17.298 cpu : usr=97.32%, sys=2.46%, ctx=7, majf=0, minf=9 00:31:17.298 IO depths : 1=0.3%, 2=1.9%, 4=68.9%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.298 issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.298 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:17.298 00:31:17.298 Run status group 0 (all jobs): 00:31:17.298 READ: bw=64.5MiB/s (67.6MB/s), 16.0MiB/s-16.2MiB/s (16.8MB/s-17.0MB/s), io=323MiB (338MB), run=5002-5003msec 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.298 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 00:31:17.299 real 0m24.218s 00:31:17.299 user 5m14.046s 00:31:17.299 sys 0m4.400s 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 ************************************ 00:31:17.299 END TEST fio_dif_rand_params 00:31:17.299 ************************************ 00:31:17.299 20:26:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:17.299 20:26:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:17.299 20:26:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:17.299 20:26:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 ************************************ 00:31:17.299 START TEST fio_dif_digest 00:31:17.299 ************************************ 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 bdev_null0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:17.299 [2024-07-15 20:26:14.663407] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.299 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:17.299 { 00:31:17.299 "params": { 00:31:17.299 "name": "Nvme$subsystem", 00:31:17.299 "trtype": "$TEST_TRANSPORT", 00:31:17.299 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:17.299 "adrfam": "ipv4", 00:31:17.299 "trsvcid": "$NVMF_PORT", 00:31:17.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.300 "hdgst": ${hdgst:-false}, 00:31:17.300 "ddgst": ${ddgst:-false} 00:31:17.300 }, 00:31:17.300 "method": "bdev_nvme_attach_controller" 00:31:17.300 } 00:31:17.300 EOF 00:31:17.300 )") 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:17.300 "params": { 00:31:17.300 "name": "Nvme0", 00:31:17.300 "trtype": "tcp", 00:31:17.300 "traddr": "10.0.0.2", 00:31:17.300 "adrfam": "ipv4", 00:31:17.300 "trsvcid": "4420", 00:31:17.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.300 "hdgst": true, 00:31:17.300 "ddgst": true 00:31:17.300 }, 00:31:17.300 "method": "bdev_nvme_attach_controller" 00:31:17.300 }' 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:17.300 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.580 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.580 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.580 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:17.580 20:26:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.839 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:17.839 ... 
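Note: the digest variant differs from the earlier jobs in two places visible above: the null bdev behind cnode0 is created with --dif-type 3, and the generated attach parameters set "hdgst": true and "ddgst": true, which enables NVMe/TCP header and data digests on the initiator side. Recreated by hand against the target's RPC socket, the subsystem setup is roughly the sequence below (the direct scripts/rpc.py invocation and default socket are assumptions; in the trace the same calls go through the rpc_cmd wrapper).

# Hand-run equivalent of create_subsystems 0 for the digest test; the
# arguments mirror the trace, the rpc.py usage itself is assumed.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420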
00:31:17.839 fio-3.35 00:31:17.839 Starting 3 threads 00:31:17.839 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.064 00:31:30.064 filename0: (groupid=0, jobs=1): err= 0: pid=1203782: Mon Jul 15 20:26:25 2024 00:31:30.064 read: IOPS=113, BW=14.2MiB/s (14.9MB/s)(142MiB/10034msec) 00:31:30.064 slat (nsec): min=5689, max=30692, avg=6671.78, stdev=1201.68 00:31:30.064 clat (usec): min=7801, max=96680, avg=26437.54, stdev=20529.26 00:31:30.064 lat (usec): min=7808, max=96686, avg=26444.21, stdev=20529.28 00:31:30.064 clat percentiles (usec): 00:31:30.064 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11338], 00:31:30.064 | 30.00th=[12387], 40.00th=[13435], 50.00th=[14222], 60.00th=[15270], 00:31:30.064 | 70.00th=[51119], 80.00th=[52691], 90.00th=[53740], 95.00th=[54789], 00:31:30.064 | 99.00th=[92799], 99.50th=[94897], 99.90th=[95945], 99.95th=[96994], 00:31:30.064 | 99.99th=[96994] 00:31:30.064 bw ( KiB/s): min=11264, max=18944, per=30.97%, avg=14528.00, stdev=2269.88, samples=20 00:31:30.064 iops : min= 88, max= 148, avg=113.50, stdev=17.73, samples=20 00:31:30.064 lat (msec) : 10=8.52%, 20=58.96%, 50=0.26%, 100=32.25% 00:31:30.064 cpu : usr=97.15%, sys=2.62%, ctx=16, majf=0, minf=151 00:31:30.064 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.064 issued rwts: total=1138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.064 filename0: (groupid=0, jobs=1): err= 0: pid=1203783: Mon Jul 15 20:26:25 2024 00:31:30.064 read: IOPS=132, BW=16.6MiB/s (17.4MB/s)(166MiB/10015msec) 00:31:30.064 slat (nsec): min=5789, max=32155, avg=6540.28, stdev=1291.82 00:31:30.064 clat (usec): min=7772, max=97782, avg=22593.98, stdev=19723.00 00:31:30.064 lat (usec): min=7778, max=97789, avg=22600.52, stdev=19722.99 00:31:30.064 clat percentiles (usec): 00:31:30.064 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10159], 00:31:30.064 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13042], 60.00th=[14353], 00:31:30.064 | 70.00th=[15401], 80.00th=[51119], 90.00th=[53740], 95.00th=[54789], 00:31:30.064 | 99.00th=[92799], 99.50th=[93848], 99.90th=[95945], 99.95th=[98042], 00:31:30.064 | 99.99th=[98042] 00:31:30.064 bw ( KiB/s): min= 8960, max=23552, per=36.18%, avg=16972.80, stdev=3444.73, samples=20 00:31:30.064 iops : min= 70, max= 184, avg=132.60, stdev=26.91, samples=20 00:31:30.064 lat (msec) : 10=18.89%, 20=56.96%, 50=0.38%, 100=23.78% 00:31:30.064 cpu : usr=96.47%, sys=3.28%, ctx=60, majf=0, minf=74 00:31:30.064 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.064 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.064 filename0: (groupid=0, jobs=1): err= 0: pid=1203784: Mon Jul 15 20:26:25 2024 00:31:30.064 read: IOPS=120, BW=15.1MiB/s (15.8MB/s)(151MiB/10027msec) 00:31:30.064 slat (nsec): min=5777, max=32836, avg=6516.67, stdev=1243.22 00:31:30.064 clat (usec): min=7510, max=95749, avg=24847.77, stdev=20591.11 00:31:30.064 lat (usec): min=7516, max=95755, avg=24854.28, stdev=20591.09 00:31:30.064 clat percentiles (usec): 
00:31:30.064 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10683], 00:31:30.064 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13829], 60.00th=[14877], 00:31:30.064 | 70.00th=[16581], 80.00th=[52167], 90.00th=[54264], 95.00th=[55313], 00:31:30.064 | 99.00th=[93848], 99.50th=[93848], 99.90th=[94897], 99.95th=[95945], 00:31:30.064 | 99.99th=[95945] 00:31:30.064 bw ( KiB/s): min=11776, max=22016, per=32.94%, avg=15449.60, stdev=2672.43, samples=20 00:31:30.064 iops : min= 92, max= 172, avg=120.70, stdev=20.88, samples=20 00:31:30.064 lat (msec) : 10=13.47%, 20=57.85%, 50=0.41%, 100=28.26% 00:31:30.064 cpu : usr=97.06%, sys=2.71%, ctx=15, majf=0, minf=137 00:31:30.064 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.064 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.064 00:31:30.064 Run status group 0 (all jobs): 00:31:30.064 READ: bw=45.8MiB/s (48.0MB/s), 14.2MiB/s-16.6MiB/s (14.9MB/s-17.4MB/s), io=460MiB (482MB), run=10015-10034msec 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.064 00:31:30.064 real 0m11.132s 00:31:30.064 user 0m41.104s 00:31:30.064 sys 0m1.162s 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.064 20:26:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.064 ************************************ 00:31:30.064 END TEST fio_dif_digest 00:31:30.064 ************************************ 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:30.064 20:26:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:30.064 20:26:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:30.064 20:26:25 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:30.064 rmmod nvme_tcp 00:31:30.064 rmmod nvme_fabrics 00:31:30.064 rmmod nvme_keyring 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1193488 ']' 00:31:30.064 20:26:25 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1193488 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1193488 ']' 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1193488 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1193488 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:30.064 20:26:25 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1193488' 00:31:30.065 killing process with pid 1193488 00:31:30.065 20:26:25 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1193488 00:31:30.065 20:26:25 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1193488 00:31:30.065 20:26:26 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:30.065 20:26:26 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:31.981 Waiting for block devices as requested 00:31:31.981 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:32.241 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:32.241 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:32.241 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:32.532 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:32.532 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:32.532 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:32.532 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:32.792 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:32.792 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:32.792 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:33.053 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:33.053 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:33.053 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:33.314 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:33.314 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:33.314 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:33.574 20:26:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:33.574 20:26:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:33.574 20:26:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.574 20:26:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.574 20:26:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.574 20:26:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:33.574 20:26:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.122 20:26:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.122 00:31:36.122 real 1m17.050s 00:31:36.122 user 7m56.755s 00:31:36.122 sys 0m19.395s 00:31:36.122 20:26:32 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.122 20:26:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.122 ************************************ 00:31:36.122 END TEST nvmf_dif 00:31:36.122 ************************************ 00:31:36.122 20:26:33 -- common/autotest_common.sh@1142 -- # return 0 00:31:36.122 20:26:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:36.122 20:26:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.122 20:26:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.122 20:26:33 -- common/autotest_common.sh@10 -- # set +x 00:31:36.122 ************************************ 00:31:36.122 START TEST nvmf_abort_qd_sizes 00:31:36.122 ************************************ 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:36.122 * Looking for test storage... 00:31:36.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.122 20:26:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:36.122 20:26:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:42.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:42.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:42.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:42.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
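Note: the device probing traced here reduces to a sysfs lookup: each supported PCI function is checked for kernel net devices under /sys/bus/pci/devices/<bdf>/net, which is how the two E810 ports end up as cvl_0_0 and cvl_0_1. A simplified stand-alone version is sketched below; the BDFs are hard-coded for illustration, whereas the real script derives them from its PCI device cache first.

# Minimal sketch of the lookup behind the "Found net devices under ..." lines.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
    done
done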
00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.904 ms 00:31:42.713 00:31:42.713 --- 10.0.0.2 ping statistics --- 00:31:42.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.713 rtt min/avg/max/mdev = 0.904/0.904/0.904/0.000 ms 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:31:42.713 00:31:42.713 --- 10.0.0.1 ping statistics --- 00:31:42.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.713 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:42.713 20:26:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:46.013 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:46.013 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1213140 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1213140 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1213140 ']' 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:46.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:46.274 20:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:46.274 [2024-07-15 20:26:43.607504] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:31:46.274 [2024-07-15 20:26:43.607561] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.274 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.274 [2024-07-15 20:26:43.677774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.534 [2024-07-15 20:26:43.748637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.534 [2024-07-15 20:26:43.748674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.534 [2024-07-15 20:26:43.748682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.534 [2024-07-15 20:26:43.748688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.534 [2024-07-15 20:26:43.748694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.534 [2024-07-15 20:26:43.748831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.534 [2024-07-15 20:26:43.748954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.534 [2024-07-15 20:26:43.749155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.534 [2024-07-15 20:26:43.749156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:47.105 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:47.106 20:26:44 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.106 20:26:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.106 ************************************ 00:31:47.106 START TEST spdk_target_abort 00:31:47.106 ************************************ 00:31:47.106 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:47.106 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:47.106 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:47.106 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.106 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.366 spdk_targetn1 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.366 [2024-07-15 20:26:44.781205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.366 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.628 [2024-07-15 20:26:44.821468] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:47.628 20:26:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:47.628 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:47.628 [2024-07-15 20:26:44.964286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:720 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:47.628 [2024-07-15 20:26:44.964315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:31:47.628 [2024-07-15 20:26:44.971673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:920 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:47.628 [2024-07-15 20:26:44.971690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0075 p:1 m:0 dnr:0 00:31:50.932 Initializing NVMe Controllers 00:31:50.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:50.932 Initialization complete. Launching workers. 00:31:50.932 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9944, failed: 2 00:31:50.932 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2969, failed to submit 6977 00:31:50.932 success 735, unsuccess 2234, failed 0 00:31:50.932 20:26:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:50.932 20:26:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:50.932 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.932 [2024-07-15 20:26:48.269248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:480 len:8 PRP1 0x200007c48000 PRP2 0x0 00:31:50.932 [2024-07-15 20:26:48.269281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:31:52.843 [2024-07-15 20:26:49.914758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:38128 len:8 PRP1 0x200007c60000 PRP2 0x0 00:31:52.843 [2024-07-15 20:26:49.914803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:00a3 p:1 m:0 dnr:0 00:31:53.785 [2024-07-15 20:26:51.020157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:63000 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:31:53.785 [2024-07-15 20:26:51.020197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00cc p:1 m:0 dnr:0 00:31:54.045 Initializing NVMe Controllers 00:31:54.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:54.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:54.045 Initialization complete. Launching workers. 
00:31:54.045 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8586, failed: 3 00:31:54.045 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7368 00:31:54.045 success 331, unsuccess 890, failed 0 00:31:54.045 20:26:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:54.045 20:26:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:54.045 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.400 Initializing NVMe Controllers 00:31:57.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:57.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:57.400 Initialization complete. Launching workers. 00:31:57.400 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41910, failed: 0 00:31:57.400 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2617, failed to submit 39293 00:31:57.400 success 595, unsuccess 2022, failed 0 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.400 20:26:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1213140 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1213140 ']' 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1213140 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1213140 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1213140' 00:31:59.308 killing process with pid 1213140 00:31:59.308 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1213140 00:31:59.308 20:26:56 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1213140 00:31:59.308 00:31:59.308 real 0m12.131s 00:31:59.308 user 0m49.084s 00:31:59.309 sys 0m2.032s 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.309 ************************************ 00:31:59.309 END TEST spdk_target_abort 00:31:59.309 ************************************ 00:31:59.309 20:26:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:59.309 20:26:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:59.309 20:26:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:59.309 20:26:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.309 20:26:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:59.309 ************************************ 00:31:59.309 START TEST kernel_target_abort 00:31:59.309 ************************************ 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 
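Condensed, the spdk_target_abort test that just finished does the following: attach the local NVMe SSD at 0000:65:00.0 to the namespaced nvmf_tgt as bdev spdk_targetn1, export it over NVMe/TCP, run the abort example against it at queue depths 4, 24 and 64, then tear the subsystem down. A minimal sketch of that sequence (rpc.py is shorthand for scripts/rpc.py talking to the nvmf_tgt started above, and the abort path is shortened; all calls are taken from the trace):

    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    for qd in 4 24 64; do            # the qds=(4 24 64) loop inside rabort
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
    rpc.py bdev_nvme_detach_controller spdk_target

The per-run "success / unsuccess / failed" line is the abort example's own summary of the abort commands it submitted at that queue depth.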
00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:59.309 20:26:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:02.612 Waiting for block devices as requested 00:32:02.612 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:02.873 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:02.873 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:02.873 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:03.134 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:03.134 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:03.134 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:03.134 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:03.395 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:03.395 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:03.657 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:03.657 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:03.657 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:03.657 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:03.918 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:03.918 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:03.918 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:04.180 No valid GPT data, bailing 00:32:04.180 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:04.441 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:04.441 00:32:04.441 Discovery Log Number of Records 2, Generation counter 2 00:32:04.441 =====Discovery Log Entry 0====== 00:32:04.441 trtype: tcp 00:32:04.441 adrfam: ipv4 00:32:04.442 subtype: current discovery subsystem 00:32:04.442 treq: not specified, sq flow control disable supported 00:32:04.442 portid: 1 00:32:04.442 trsvcid: 4420 00:32:04.442 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:04.442 traddr: 10.0.0.1 00:32:04.442 eflags: none 00:32:04.442 sectype: none 00:32:04.442 =====Discovery Log Entry 1====== 00:32:04.442 trtype: tcp 00:32:04.442 adrfam: ipv4 00:32:04.442 subtype: nvme subsystem 00:32:04.442 treq: not specified, sq flow control disable supported 00:32:04.442 portid: 1 00:32:04.442 trsvcid: 4420 00:32:04.442 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:04.442 traddr: 10.0.0.1 00:32:04.442 eflags: none 00:32:04.442 sectype: none 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:04.442 20:27:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.442 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.743 Initializing NVMe Controllers 00:32:07.743 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:07.743 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:07.743 Initialization complete. Launching workers. 00:32:07.743 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 46489, failed: 0 00:32:07.743 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 46489, failed to submit 0 00:32:07.743 success 0, unsuccess 46489, failed 0 00:32:07.743 20:27:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:07.743 20:27:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:07.743 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.046 Initializing NVMe Controllers 00:32:11.046 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:11.046 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:11.047 Initialization complete. Launching workers. 
00:32:11.047 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86815, failed: 0 00:32:11.047 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21850, failed to submit 64965 00:32:11.047 success 0, unsuccess 21850, failed 0 00:32:11.047 20:27:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.047 20:27:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.047 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.592 Initializing NVMe Controllers 00:32:13.592 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:13.592 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:13.592 Initialization complete. Launching workers. 00:32:13.592 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82814, failed: 0 00:32:13.592 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20694, failed to submit 62120 00:32:13.592 success 0, unsuccess 20694, failed 0 00:32:13.592 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:13.592 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:13.592 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:13.852 20:27:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:17.151 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:17.151 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:17.412 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:17.412 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:17.412 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:17.412 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:17.412 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:17.412 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:19.373 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:19.373 00:32:19.373 real 0m20.019s 00:32:19.373 user 0m7.979s 00:32:19.373 sys 0m6.425s 00:32:19.373 20:27:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:19.373 20:27:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:19.373 ************************************ 00:32:19.373 END TEST kernel_target_abort 00:32:19.373 ************************************ 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:19.373 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:19.373 rmmod nvme_tcp 00:32:19.373 rmmod nvme_fabrics 00:32:19.373 rmmod nvme_keyring 00:32:19.634 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:19.634 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:19.634 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1213140 ']' 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1213140 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1213140 ']' 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1213140 00:32:19.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1213140) - No such process 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1213140 is not found' 00:32:19.635 Process with pid 1213140 is not found 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:19.635 20:27:16 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:22.938 Waiting for block devices as requested 00:32:22.938 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:22.938 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:22.938 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:22.938 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:22.938 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:23.199 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:23.199 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:23.199 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:23.460 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:23.460 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:23.720 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:23.720 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:23.720 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:23.720 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:23.980 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:23.980 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:23.980 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:24.240 20:27:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.784 20:27:23 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.784 00:32:26.784 real 0m50.627s 00:32:26.784 user 1m1.939s 00:32:26.784 sys 0m18.627s 00:32:26.784 20:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:26.784 20:27:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:26.784 ************************************ 00:32:26.784 END TEST nvmf_abort_qd_sizes 00:32:26.784 ************************************ 00:32:26.784 20:27:23 -- common/autotest_common.sh@1142 -- # return 0 00:32:26.784 20:27:23 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:26.784 20:27:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:26.784 20:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.784 20:27:23 -- common/autotest_common.sh@10 -- # set +x 00:32:26.784 ************************************ 00:32:26.784 START TEST keyring_file 00:32:26.784 ************************************ 00:32:26.784 20:27:23 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:26.784 * Looking for test storage... 
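For reference, the kernel_target_abort half of the test above drives the in-kernel nvmet target entirely through configfs. The xtrace only records the echoed values, not the files they are redirected into, so the standard nvmet configfs attribute names are assumed in this condensed sketch; the NQN, IP and /dev/nvme0n1 device path are taken from the run:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    # the trace also echoes "SPDK-nqn.2016-06.io.spdk:testnqn" into a subsystem attribute
    # (serial/model string); the exact target file is not visible in the xtrace
    echo 1            > "$sub/attr_allow_any_host"            # assumed attribute file
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"       # assumed attribute file
    echo 1            > "$sub/namespaces/1/enable"            # assumed attribute file
    echo 10.0.0.1     > "$port/addr_traddr"                   # assumed attribute file
    echo tcp          > "$port/addr_trtype"                   # assumed attribute file
    echo 4420         > "$port/addr_trsvcid"                  # assumed attribute file
    echo ipv4         > "$port/addr_adrfam"                   # assumed attribute file
    ln -s "$sub" "$port/subsystems/"
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420

Teardown (clean_kernel_target) is the mirror image seen in the trace: echo 0 into the namespace enable file, remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.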
00:32:26.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:26.784 20:27:23 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:26.784 20:27:23 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.784 20:27:23 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.785 20:27:23 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.785 20:27:23 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.785 20:27:23 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.785 20:27:23 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.785 20:27:23 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.785 20:27:23 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.785 20:27:23 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:26.785 20:27:23 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ueU61sMXK3 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:26.785 20:27:23 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ueU61sMXK3 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ueU61sMXK3 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ueU61sMXK3 00:32:26.785 20:27:23 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lDRH2PoT78 00:32:26.785 20:27:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:26.785 20:27:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:26.785 20:27:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lDRH2PoT78 00:32:26.785 20:27:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lDRH2PoT78 00:32:26.785 20:27:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lDRH2PoT78 00:32:26.785 20:27:24 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:26.785 20:27:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=1223831 00:32:26.785 20:27:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1223831 00:32:26.785 20:27:24 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1223831 ']' 00:32:26.785 20:27:24 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.785 20:27:24 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:26.785 20:27:24 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.785 20:27:24 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:26.785 20:27:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:26.785 [2024-07-15 20:27:24.055132] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
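The prep_key calls above only turn the two hex keys into NVMe/TCP PSK interchange files with restrictive permissions; the keys are registered with the bdevperf instance further down via keyring_file_add_key. A minimal sketch of what prep_key does for key0 (the redirection into the temp file is not visible in the xtrace, so that part is assumed; the temp path shown is the one from this run):

    key0=00112233445566778899aabbccddeeff
    key0path=$(mktemp)                                   # /tmp/tmp.ueU61sMXK3 in this run
    format_interchange_psk "$key0" 0 > "$key0path"       # helper from test/nvmf/common.sh, digest 0
    chmod 0600 "$key0path"
    # key1=112233445566778899aabbccddeeff00 gets the same treatment -> /tmp/tmp.lDRH2PoT78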
00:32:26.785 [2024-07-15 20:27:24.055205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223831 ] 00:32:26.785 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.785 [2024-07-15 20:27:24.120453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.785 [2024-07-15 20:27:24.199242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:27.725 20:27:24 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.725 [2024-07-15 20:27:24.830911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.725 null0 00:32:27.725 [2024-07-15 20:27:24.862956] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:27.725 [2024-07-15 20:27:24.863218] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:27.725 [2024-07-15 20:27:24.870961] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.725 20:27:24 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.725 [2024-07-15 20:27:24.887005] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:27.725 request: 00:32:27.725 { 00:32:27.725 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.725 "secure_channel": false, 00:32:27.725 "listen_address": { 00:32:27.725 "trtype": "tcp", 00:32:27.725 "traddr": "127.0.0.1", 00:32:27.725 "trsvcid": "4420" 00:32:27.725 }, 00:32:27.725 "method": "nvmf_subsystem_add_listener", 00:32:27.725 "req_id": 1 00:32:27.725 } 00:32:27.725 Got JSON-RPC error response 00:32:27.725 response: 00:32:27.725 { 00:32:27.725 "code": -32602, 00:32:27.725 "message": "Invalid parameters" 00:32:27.725 } 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 
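The exchange above is a deliberate negative test: the target is already listening on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0, so adding the same listener again has to fail, and the trace shows the target-side "Listener already exists" error coming back as a JSON-RPC -32602 "Invalid parameters" response. The NOT wrapper used above asserts exactly that; in plain shell the check is roughly:

    # expected-failure check (sketch of what NOT rpc_cmd ... does here)
    if rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo "duplicate listener add unexpectedly succeeded" >&2
        exit 1
    fi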
00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:27.725 20:27:24 keyring_file -- keyring/file.sh@46 -- # bperfpid=1223981 00:32:27.725 20:27:24 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1223981 /var/tmp/bperf.sock 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1223981 ']' 00:32:27.725 20:27:24 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:27.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:27.725 20:27:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.725 [2024-07-15 20:27:24.943058] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 00:32:27.725 [2024-07-15 20:27:24.943103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223981 ] 00:32:27.725 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.725 [2024-07-15 20:27:25.018573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.725 [2024-07-15 20:27:25.082635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.295 20:27:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:28.295 20:27:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:28.295 20:27:25 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:28.295 20:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:28.555 20:27:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lDRH2PoT78 00:32:28.555 20:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lDRH2PoT78 00:32:28.816 20:27:26 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:28.816 20:27:26 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:28.816 20:27:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.816 20:27:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.816 20:27:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.816 20:27:26 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ueU61sMXK3 == \/\t\m\p\/\t\m\p\.\u\e\U\6\1\s\M\X\K\3 ]] 00:32:28.816 20:27:26 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:28.816 20:27:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:28.816 20:27:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.816 20:27:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:28.816 20:27:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.076 20:27:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lDRH2PoT78 == \/\t\m\p\/\t\m\p\.\l\D\R\H\2\P\o\T\7\8 ]] 00:32:29.076 20:27:26 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.076 20:27:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:29.076 20:27:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.076 20:27:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.337 20:27:26 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:29.337 20:27:26 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.337 20:27:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.598 [2024-07-15 20:27:26.783078] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:29.598 nvme0n1 00:32:29.598 20:27:26 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:29.598 20:27:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.598 20:27:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.598 20:27:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.598 20:27:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.598 20:27:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.859 20:27:27 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:29.859 20:27:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:29.859 20:27:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:29.859 20:27:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.859 20:27:27 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.859 20:27:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.859 20:27:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.859 20:27:27 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:29.859 20:27:27 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.859 Running I/O for 1 seconds... 00:32:31.240 00:32:31.240 Latency(us) 00:32:31.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.240 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:31.240 nvme0n1 : 1.02 7024.02 27.44 0.00 0.00 18060.42 8956.59 29272.75 00:32:31.240 =================================================================================================================== 00:32:31.240 Total : 7024.02 27.44 0.00 0.00 18060.42 8956.59 29272.75 00:32:31.240 0 00:32:31.240 20:27:28 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:31.240 20:27:28 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.240 20:27:28 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:31.240 20:27:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.240 20:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.500 20:27:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:31.500 20:27:28 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.500 20:27:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:31.500 20:27:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.500 20:27:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:31.500 20:27:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:31.500 20:27:28 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:31.500 20:27:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:31.500 20:27:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.500 20:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.760 [2024-07-15 20:27:28.986850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:31.760 [2024-07-15 20:27:28.987806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10219d0 (107): Transport endpoint is not connected 00:32:31.760 [2024-07-15 20:27:28.988801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10219d0 (9): Bad file descriptor 00:32:31.760 [2024-07-15 20:27:28.989803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:31.760 [2024-07-15 20:27:28.989809] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:31.760 [2024-07-15 20:27:28.989815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:31.760 request: 00:32:31.760 { 00:32:31.760 "name": "nvme0", 00:32:31.760 "trtype": "tcp", 00:32:31.760 "traddr": "127.0.0.1", 00:32:31.760 "adrfam": "ipv4", 00:32:31.760 "trsvcid": "4420", 00:32:31.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.760 "prchk_reftag": false, 00:32:31.760 "prchk_guard": false, 00:32:31.760 "hdgst": false, 00:32:31.760 "ddgst": false, 00:32:31.760 "psk": "key1", 00:32:31.760 "method": "bdev_nvme_attach_controller", 00:32:31.760 "req_id": 1 00:32:31.760 } 00:32:31.760 Got JSON-RPC error response 00:32:31.760 response: 00:32:31.760 { 00:32:31.760 "code": -5, 00:32:31.760 "message": "Input/output error" 00:32:31.760 } 00:32:31.760 20:27:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:31.760 20:27:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:31.760 20:27:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:31.760 20:27:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:31.760 20:27:29 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.760 20:27:29 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:31.760 20:27:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.760 20:27:29 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.760 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.020 20:27:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:32.020 20:27:29 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:32.020 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:32.279 20:27:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:32.279 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:32.279 20:27:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:32.279 20:27:29 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:32.279 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.538 20:27:29 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:32.538 20:27:29 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ueU61sMXK3 00:32:32.538 20:27:29 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:32.538 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:32.538 [2024-07-15 20:27:29.942576] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ueU61sMXK3': 0100660 00:32:32.538 [2024-07-15 20:27:29.942593] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:32.538 request: 00:32:32.538 { 00:32:32.538 "name": "key0", 00:32:32.538 "path": "/tmp/tmp.ueU61sMXK3", 00:32:32.538 "method": "keyring_file_add_key", 00:32:32.538 "req_id": 1 00:32:32.538 } 00:32:32.538 Got JSON-RPC error response 00:32:32.538 response: 00:32:32.538 { 00:32:32.538 "code": -1, 00:32:32.538 "message": "Operation not permitted" 00:32:32.538 } 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.538 20:27:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.538 20:27:29 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.538 20:27:29 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ueU61sMXK3 00:32:32.538 20:27:29 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:32.538 20:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ueU61sMXK3 00:32:32.797 20:27:30 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ueU61sMXK3 00:32:32.797 20:27:30 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:32.797 20:27:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:32.797 20:27:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.797 20:27:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.797 20:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.797 20:27:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.058 20:27:30 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:33.058 20:27:30 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.058 20:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.058 [2024-07-15 20:27:30.412108] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ueU61sMXK3': No such file or directory 00:32:33.058 [2024-07-15 20:27:30.412126] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:33.058 [2024-07-15 20:27:30.412142] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:33.058 [2024-07-15 20:27:30.412146] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:33.058 [2024-07-15 20:27:30.412151] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:33.058 request: 00:32:33.058 { 00:32:33.058 "name": "nvme0", 00:32:33.058 "trtype": "tcp", 00:32:33.058 "traddr": "127.0.0.1", 00:32:33.058 "adrfam": "ipv4", 00:32:33.058 
"trsvcid": "4420", 00:32:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:33.058 "prchk_reftag": false, 00:32:33.058 "prchk_guard": false, 00:32:33.058 "hdgst": false, 00:32:33.058 "ddgst": false, 00:32:33.058 "psk": "key0", 00:32:33.058 "method": "bdev_nvme_attach_controller", 00:32:33.058 "req_id": 1 00:32:33.058 } 00:32:33.058 Got JSON-RPC error response 00:32:33.058 response: 00:32:33.058 { 00:32:33.058 "code": -19, 00:32:33.058 "message": "No such device" 00:32:33.058 } 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:33.058 20:27:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:33.058 20:27:30 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:33.058 20:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:33.318 20:27:30 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.59cg15Sqsw 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:33.318 20:27:30 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:33.318 20:27:30 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.318 20:27:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:33.318 20:27:30 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:33.318 20:27:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:33.318 20:27:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.59cg15Sqsw 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.59cg15Sqsw 00:32:33.318 20:27:30 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.59cg15Sqsw 00:32:33.318 20:27:30 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.59cg15Sqsw 00:32:33.318 20:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.59cg15Sqsw 00:32:33.579 20:27:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.579 20:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.839 nvme0n1 00:32:33.839 
20:27:31 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:33.839 20:27:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:33.839 20:27:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.839 20:27:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.839 20:27:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.839 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.839 20:27:31 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:33.839 20:27:31 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:33.839 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:34.101 20:27:31 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:34.101 20:27:31 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.101 20:27:31 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:34.101 20:27:31 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:34.101 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.362 20:27:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:34.362 20:27:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:34.362 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:34.623 20:27:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:34.623 20:27:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:34.623 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.623 20:27:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:34.623 20:27:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.59cg15Sqsw 00:32:34.623 20:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.59cg15Sqsw 00:32:34.884 20:27:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lDRH2PoT78 00:32:34.884 20:27:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lDRH2PoT78 00:32:34.884 20:27:32 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:34.884 20:27:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.145 nvme0n1 00:32:35.145 20:27:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:35.145 20:27:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:35.511 20:27:32 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:35.511 "subsystems": [ 00:32:35.511 { 00:32:35.511 "subsystem": "keyring", 00:32:35.511 "config": [ 00:32:35.511 { 00:32:35.511 "method": "keyring_file_add_key", 00:32:35.511 "params": { 00:32:35.511 "name": "key0", 00:32:35.511 "path": "/tmp/tmp.59cg15Sqsw" 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "keyring_file_add_key", 00:32:35.511 "params": { 00:32:35.511 "name": "key1", 00:32:35.511 "path": "/tmp/tmp.lDRH2PoT78" 00:32:35.511 } 00:32:35.511 } 00:32:35.511 ] 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "subsystem": "iobuf", 00:32:35.511 "config": [ 00:32:35.511 { 00:32:35.511 "method": "iobuf_set_options", 00:32:35.511 "params": { 00:32:35.511 "small_pool_count": 8192, 00:32:35.511 "large_pool_count": 1024, 00:32:35.511 "small_bufsize": 8192, 00:32:35.511 "large_bufsize": 135168 00:32:35.511 } 00:32:35.511 } 00:32:35.511 ] 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "subsystem": "sock", 00:32:35.511 "config": [ 00:32:35.511 { 00:32:35.511 "method": "sock_set_default_impl", 00:32:35.511 "params": { 00:32:35.511 "impl_name": "posix" 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "sock_impl_set_options", 00:32:35.511 "params": { 00:32:35.511 "impl_name": "ssl", 00:32:35.511 "recv_buf_size": 4096, 00:32:35.511 "send_buf_size": 4096, 00:32:35.511 "enable_recv_pipe": true, 00:32:35.511 "enable_quickack": false, 00:32:35.511 "enable_placement_id": 0, 00:32:35.511 "enable_zerocopy_send_server": true, 00:32:35.511 "enable_zerocopy_send_client": false, 00:32:35.511 "zerocopy_threshold": 0, 00:32:35.511 "tls_version": 0, 00:32:35.511 "enable_ktls": false 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "sock_impl_set_options", 00:32:35.511 "params": { 00:32:35.511 "impl_name": "posix", 00:32:35.511 "recv_buf_size": 2097152, 00:32:35.511 "send_buf_size": 2097152, 00:32:35.511 "enable_recv_pipe": true, 00:32:35.511 "enable_quickack": false, 00:32:35.511 "enable_placement_id": 0, 00:32:35.511 "enable_zerocopy_send_server": true, 00:32:35.511 "enable_zerocopy_send_client": false, 00:32:35.511 "zerocopy_threshold": 0, 00:32:35.511 "tls_version": 0, 00:32:35.511 "enable_ktls": false 00:32:35.511 } 00:32:35.511 } 00:32:35.511 ] 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "subsystem": "vmd", 00:32:35.511 "config": [] 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "subsystem": "accel", 00:32:35.511 "config": [ 00:32:35.511 { 00:32:35.511 "method": "accel_set_options", 00:32:35.511 "params": { 00:32:35.511 "small_cache_size": 128, 00:32:35.511 "large_cache_size": 16, 00:32:35.511 "task_count": 2048, 00:32:35.511 "sequence_count": 2048, 00:32:35.511 "buf_count": 2048 00:32:35.511 } 00:32:35.511 } 00:32:35.511 ] 00:32:35.511 
}, 00:32:35.511 { 00:32:35.511 "subsystem": "bdev", 00:32:35.511 "config": [ 00:32:35.511 { 00:32:35.511 "method": "bdev_set_options", 00:32:35.511 "params": { 00:32:35.511 "bdev_io_pool_size": 65535, 00:32:35.511 "bdev_io_cache_size": 256, 00:32:35.511 "bdev_auto_examine": true, 00:32:35.511 "iobuf_small_cache_size": 128, 00:32:35.511 "iobuf_large_cache_size": 16 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "bdev_raid_set_options", 00:32:35.511 "params": { 00:32:35.511 "process_window_size_kb": 1024 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "bdev_iscsi_set_options", 00:32:35.511 "params": { 00:32:35.511 "timeout_sec": 30 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "bdev_nvme_set_options", 00:32:35.511 "params": { 00:32:35.511 "action_on_timeout": "none", 00:32:35.511 "timeout_us": 0, 00:32:35.511 "timeout_admin_us": 0, 00:32:35.511 "keep_alive_timeout_ms": 10000, 00:32:35.511 "arbitration_burst": 0, 00:32:35.511 "low_priority_weight": 0, 00:32:35.511 "medium_priority_weight": 0, 00:32:35.511 "high_priority_weight": 0, 00:32:35.511 "nvme_adminq_poll_period_us": 10000, 00:32:35.511 "nvme_ioq_poll_period_us": 0, 00:32:35.511 "io_queue_requests": 512, 00:32:35.511 "delay_cmd_submit": true, 00:32:35.511 "transport_retry_count": 4, 00:32:35.511 "bdev_retry_count": 3, 00:32:35.511 "transport_ack_timeout": 0, 00:32:35.511 "ctrlr_loss_timeout_sec": 0, 00:32:35.511 "reconnect_delay_sec": 0, 00:32:35.511 "fast_io_fail_timeout_sec": 0, 00:32:35.511 "disable_auto_failback": false, 00:32:35.511 "generate_uuids": false, 00:32:35.511 "transport_tos": 0, 00:32:35.511 "nvme_error_stat": false, 00:32:35.511 "rdma_srq_size": 0, 00:32:35.511 "io_path_stat": false, 00:32:35.511 "allow_accel_sequence": false, 00:32:35.511 "rdma_max_cq_size": 0, 00:32:35.511 "rdma_cm_event_timeout_ms": 0, 00:32:35.511 "dhchap_digests": [ 00:32:35.511 "sha256", 00:32:35.511 "sha384", 00:32:35.511 "sha512" 00:32:35.511 ], 00:32:35.511 "dhchap_dhgroups": [ 00:32:35.511 "null", 00:32:35.511 "ffdhe2048", 00:32:35.511 "ffdhe3072", 00:32:35.511 "ffdhe4096", 00:32:35.511 "ffdhe6144", 00:32:35.511 "ffdhe8192" 00:32:35.511 ] 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "bdev_nvme_attach_controller", 00:32:35.511 "params": { 00:32:35.511 "name": "nvme0", 00:32:35.511 "trtype": "TCP", 00:32:35.511 "adrfam": "IPv4", 00:32:35.511 "traddr": "127.0.0.1", 00:32:35.511 "trsvcid": "4420", 00:32:35.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.511 "prchk_reftag": false, 00:32:35.511 "prchk_guard": false, 00:32:35.511 "ctrlr_loss_timeout_sec": 0, 00:32:35.511 "reconnect_delay_sec": 0, 00:32:35.511 "fast_io_fail_timeout_sec": 0, 00:32:35.511 "psk": "key0", 00:32:35.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.511 "hdgst": false, 00:32:35.511 "ddgst": false 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "bdev_nvme_set_hotplug", 00:32:35.511 "params": { 00:32:35.511 "period_us": 100000, 00:32:35.511 "enable": false 00:32:35.511 } 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "method": "bdev_wait_for_examine" 00:32:35.511 } 00:32:35.511 ] 00:32:35.511 }, 00:32:35.511 { 00:32:35.511 "subsystem": "nbd", 00:32:35.511 "config": [] 00:32:35.511 } 00:32:35.511 ] 00:32:35.511 }' 00:32:35.512 20:27:32 keyring_file -- keyring/file.sh@114 -- # killprocess 1223981 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1223981 ']' 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1223981 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223981 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223981' 00:32:35.512 killing process with pid 1223981 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@967 -- # kill 1223981 00:32:35.512 Received shutdown signal, test time was about 1.000000 seconds 00:32:35.512 00:32:35.512 Latency(us) 00:32:35.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.512 =================================================================================================================== 00:32:35.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@972 -- # wait 1223981 00:32:35.512 20:27:32 keyring_file -- keyring/file.sh@117 -- # bperfpid=1225613 00:32:35.512 20:27:32 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1225613 /var/tmp/bperf.sock 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1225613 ']' 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:35.512 20:27:32 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
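At this point the first bdevperf instance (pid 1223981) has been torn down and a second one (pid 1225613) is started that consumes the JSON produced by save_config instead of re-issuing the keyring and bdev RPCs; the -c /dev/fd/63 argument is consistent with bash process substitution feeding the echoed config shown below. A rough reconstruction of the step, with paths shortened and the process-substitution form treated as an inference from the trace:

  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)   # dump keyring + bdev state (file.sh@112)
  killprocess "$bperfpid"                                       # autotest helper, stops the first bdevperf
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")              # reload the saved config at startup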
00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:35.512 20:27:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:35.512 20:27:32 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:35.512 "subsystems": [ 00:32:35.512 { 00:32:35.512 "subsystem": "keyring", 00:32:35.512 "config": [ 00:32:35.512 { 00:32:35.512 "method": "keyring_file_add_key", 00:32:35.512 "params": { 00:32:35.512 "name": "key0", 00:32:35.512 "path": "/tmp/tmp.59cg15Sqsw" 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "keyring_file_add_key", 00:32:35.512 "params": { 00:32:35.512 "name": "key1", 00:32:35.512 "path": "/tmp/tmp.lDRH2PoT78" 00:32:35.512 } 00:32:35.512 } 00:32:35.512 ] 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "subsystem": "iobuf", 00:32:35.512 "config": [ 00:32:35.512 { 00:32:35.512 "method": "iobuf_set_options", 00:32:35.512 "params": { 00:32:35.512 "small_pool_count": 8192, 00:32:35.512 "large_pool_count": 1024, 00:32:35.512 "small_bufsize": 8192, 00:32:35.512 "large_bufsize": 135168 00:32:35.512 } 00:32:35.512 } 00:32:35.512 ] 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "subsystem": "sock", 00:32:35.512 "config": [ 00:32:35.512 { 00:32:35.512 "method": "sock_set_default_impl", 00:32:35.512 "params": { 00:32:35.512 "impl_name": "posix" 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "sock_impl_set_options", 00:32:35.512 "params": { 00:32:35.512 "impl_name": "ssl", 00:32:35.512 "recv_buf_size": 4096, 00:32:35.512 "send_buf_size": 4096, 00:32:35.512 "enable_recv_pipe": true, 00:32:35.512 "enable_quickack": false, 00:32:35.512 "enable_placement_id": 0, 00:32:35.512 "enable_zerocopy_send_server": true, 00:32:35.512 "enable_zerocopy_send_client": false, 00:32:35.512 "zerocopy_threshold": 0, 00:32:35.512 "tls_version": 0, 00:32:35.512 "enable_ktls": false 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "sock_impl_set_options", 00:32:35.512 "params": { 00:32:35.512 "impl_name": "posix", 00:32:35.512 "recv_buf_size": 2097152, 00:32:35.512 "send_buf_size": 2097152, 00:32:35.512 "enable_recv_pipe": true, 00:32:35.512 "enable_quickack": false, 00:32:35.512 "enable_placement_id": 0, 00:32:35.512 "enable_zerocopy_send_server": true, 00:32:35.512 "enable_zerocopy_send_client": false, 00:32:35.512 "zerocopy_threshold": 0, 00:32:35.512 "tls_version": 0, 00:32:35.512 "enable_ktls": false 00:32:35.512 } 00:32:35.512 } 00:32:35.512 ] 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "subsystem": "vmd", 00:32:35.512 "config": [] 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "subsystem": "accel", 00:32:35.512 "config": [ 00:32:35.512 { 00:32:35.512 "method": "accel_set_options", 00:32:35.512 "params": { 00:32:35.512 "small_cache_size": 128, 00:32:35.512 "large_cache_size": 16, 00:32:35.512 "task_count": 2048, 00:32:35.512 "sequence_count": 2048, 00:32:35.512 "buf_count": 2048 00:32:35.512 } 00:32:35.512 } 00:32:35.512 ] 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "subsystem": "bdev", 00:32:35.512 "config": [ 00:32:35.512 { 00:32:35.512 "method": "bdev_set_options", 00:32:35.512 "params": { 00:32:35.512 "bdev_io_pool_size": 65535, 00:32:35.512 "bdev_io_cache_size": 256, 00:32:35.512 "bdev_auto_examine": true, 00:32:35.512 "iobuf_small_cache_size": 128, 00:32:35.512 "iobuf_large_cache_size": 16 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "bdev_raid_set_options", 00:32:35.512 "params": { 00:32:35.512 "process_window_size_kb": 1024 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 
"method": "bdev_iscsi_set_options", 00:32:35.512 "params": { 00:32:35.512 "timeout_sec": 30 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "bdev_nvme_set_options", 00:32:35.512 "params": { 00:32:35.512 "action_on_timeout": "none", 00:32:35.512 "timeout_us": 0, 00:32:35.512 "timeout_admin_us": 0, 00:32:35.512 "keep_alive_timeout_ms": 10000, 00:32:35.512 "arbitration_burst": 0, 00:32:35.512 "low_priority_weight": 0, 00:32:35.512 "medium_priority_weight": 0, 00:32:35.512 "high_priority_weight": 0, 00:32:35.512 "nvme_adminq_poll_period_us": 10000, 00:32:35.512 "nvme_ioq_poll_period_us": 0, 00:32:35.512 "io_queue_requests": 512, 00:32:35.512 "delay_cmd_submit": true, 00:32:35.512 "transport_retry_count": 4, 00:32:35.512 "bdev_retry_count": 3, 00:32:35.512 "transport_ack_timeout": 0, 00:32:35.512 "ctrlr_loss_timeout_sec": 0, 00:32:35.512 "reconnect_delay_sec": 0, 00:32:35.512 "fast_io_fail_timeout_sec": 0, 00:32:35.512 "disable_auto_failback": false, 00:32:35.512 "generate_uuids": false, 00:32:35.512 "transport_tos": 0, 00:32:35.512 "nvme_error_stat": false, 00:32:35.512 "rdma_srq_size": 0, 00:32:35.512 "io_path_stat": false, 00:32:35.512 "allow_accel_sequence": false, 00:32:35.512 "rdma_max_cq_size": 0, 00:32:35.512 "rdma_cm_event_timeout_ms": 0, 00:32:35.512 "dhchap_digests": [ 00:32:35.512 "sha256", 00:32:35.512 "sha384", 00:32:35.512 "sha512" 00:32:35.512 ], 00:32:35.512 "dhchap_dhgroups": [ 00:32:35.512 "null", 00:32:35.512 "ffdhe2048", 00:32:35.512 "ffdhe3072", 00:32:35.512 "ffdhe4096", 00:32:35.512 "ffdhe6144", 00:32:35.512 "ffdhe8192" 00:32:35.512 ] 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "bdev_nvme_attach_controller", 00:32:35.512 "params": { 00:32:35.512 "name": "nvme0", 00:32:35.512 "trtype": "TCP", 00:32:35.512 "adrfam": "IPv4", 00:32:35.512 "traddr": "127.0.0.1", 00:32:35.512 "trsvcid": "4420", 00:32:35.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.512 "prchk_reftag": false, 00:32:35.512 "prchk_guard": false, 00:32:35.512 "ctrlr_loss_timeout_sec": 0, 00:32:35.512 "reconnect_delay_sec": 0, 00:32:35.512 "fast_io_fail_timeout_sec": 0, 00:32:35.512 "psk": "key0", 00:32:35.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.512 "hdgst": false, 00:32:35.512 "ddgst": false 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "bdev_nvme_set_hotplug", 00:32:35.512 "params": { 00:32:35.512 "period_us": 100000, 00:32:35.512 "enable": false 00:32:35.512 } 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "method": "bdev_wait_for_examine" 00:32:35.512 } 00:32:35.512 ] 00:32:35.512 }, 00:32:35.512 { 00:32:35.512 "subsystem": "nbd", 00:32:35.512 "config": [] 00:32:35.512 } 00:32:35.512 ] 00:32:35.512 }' 00:32:35.773 [2024-07-15 20:27:32.983804] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
00:32:35.773 [2024-07-15 20:27:32.983861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225613 ] 00:32:35.773 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.773 [2024-07-15 20:27:33.059498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.773 [2024-07-15 20:27:33.113039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.033 [2024-07-15 20:27:33.254566] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:36.603 20:27:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:36.603 20:27:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:36.603 20:27:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:36.603 20:27:33 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:36.603 20:27:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.603 20:27:33 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:36.603 20:27:33 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:36.603 20:27:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.603 20:27:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.603 20:27:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.603 20:27:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.603 20:27:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.863 20:27:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:36.863 20:27:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:36.863 20:27:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:36.863 20:27:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.863 20:27:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.863 20:27:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.863 20:27:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:36.863 20:27:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:36.863 20:27:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:36.863 20:27:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:36.863 20:27:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:37.124 20:27:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:37.124 20:27:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:37.124 20:27:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.59cg15Sqsw /tmp/tmp.lDRH2PoT78 00:32:37.124 20:27:34 keyring_file -- keyring/file.sh@20 -- # killprocess 1225613 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1225613 ']' 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1225613 00:32:37.124 20:27:34 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225613 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225613' 00:32:37.124 killing process with pid 1225613 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@967 -- # kill 1225613 00:32:37.124 Received shutdown signal, test time was about 1.000000 seconds 00:32:37.124 00:32:37.124 Latency(us) 00:32:37.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.124 =================================================================================================================== 00:32:37.124 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@972 -- # wait 1225613 00:32:37.124 20:27:34 keyring_file -- keyring/file.sh@21 -- # killprocess 1223831 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1223831 ']' 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1223831 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:37.124 20:27:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223831 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223831' 00:32:37.385 killing process with pid 1223831 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@967 -- # kill 1223831 00:32:37.385 [2024-07-15 20:27:34.598431] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@972 -- # wait 1223831 00:32:37.385 00:32:37.385 real 0m11.058s 00:32:37.385 user 0m25.778s 00:32:37.385 sys 0m2.624s 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:37.385 20:27:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:37.647 ************************************ 00:32:37.647 END TEST keyring_file 00:32:37.647 ************************************ 00:32:37.647 20:27:34 -- common/autotest_common.sh@1142 -- # return 0 00:32:37.647 20:27:34 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:37.647 20:27:34 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:37.647 20:27:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:37.647 20:27:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.647 20:27:34 -- common/autotest_common.sh@10 -- # set +x 00:32:37.647 ************************************ 00:32:37.647 START TEST keyring_linux 00:32:37.647 ************************************ 00:32:37.647 20:27:34 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:37.647 * Looking for test storage... 00:32:37.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:37.647 20:27:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:37.647 20:27:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.647 20:27:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.647 20:27:35 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.647 20:27:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.647 20:27:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.647 20:27:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.647 20:27:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.647 20:27:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.648 20:27:35 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.648 20:27:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:37.648 20:27:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:37.648 20:27:35 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:37.648 /tmp/:spdk-test:key0 00:32:37.648 20:27:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:37.648 20:27:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:37.648 20:27:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:37.909 20:27:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:37.909 20:27:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:37.909 /tmp/:spdk-test:key1 00:32:37.909 20:27:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:37.909 20:27:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1226192 00:32:37.909 20:27:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1226192 00:32:37.909 20:27:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1226192 ']' 00:32:37.909 20:27:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.909 20:27:35 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:37.909 20:27:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.909 20:27:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:37.909 20:27:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:37.909 [2024-07-15 20:27:35.158073] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
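prep_key, traced just above for /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, writes the key material in the TLS PSK interchange format before anything registers it: format_interchange_psk (test/nvmf/common.sh) wraps the configured hex string as NVMeTLSkey-1:<digest>:<base64 payload>:, where digest 00 in this run means no PSK hash and, judging by the keyctl payload later in the log, the base64 payload is the key bytes with a CRC-32 appended. The python body behind format_key is elided from the trace, so treating that helper as given, the preparation amounts to something like:

  path=/tmp/:spdk-test:key0
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # NVMeTLSkey-1:00:...:
  chmod 0600 "$path"   # the earlier 0660 attempt in this log was rejected by keyring_file_add_key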
00:32:37.909 [2024-07-15 20:27:35.158164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226192 ] 00:32:37.909 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.909 [2024-07-15 20:27:35.221700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.909 [2024-07-15 20:27:35.295698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:38.852 20:27:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:38.852 [2024-07-15 20:27:35.935895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.852 null0 00:32:38.852 [2024-07-15 20:27:35.967943] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:38.852 [2024-07-15 20:27:35.968331] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.852 20:27:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:38.852 211179249 00:32:38.852 20:27:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:38.852 528041886 00:32:38.852 20:27:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1226230 00:32:38.852 20:27:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1226230 /var/tmp/bperf.sock 00:32:38.852 20:27:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1226230 ']' 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:38.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.852 20:27:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:38.852 [2024-07-15 20:27:36.042286] Starting SPDK v24.09-pre git sha1 35c1e586c / DPDK 24.03.0 initialization... 
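From here the bdevperf process is driven purely over its private RPC socket; the sequence exercised next in the trace is, in condensed form (the bperf_rpc wrapper below is illustrative shorthand for the test's bperf_cmd helper):

  # Shorthand for bperf_cmd: rpc.py pointed at bdevperf's socket.
  bperf_rpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock "$@"
  }

  # Enable the Linux keyring plugin before subsystem init so that key names
  # such as :spdk-test:key0 resolve against the kernel session keyring.
  bperf_rpc keyring_linux_set_options --enable
  bperf_rpc framework_start_init

  # Attach to the TLS listener created by spdk_tgt on 127.0.0.1:4420,
  # referencing the PSK by its keyring name rather than by a file path.
  bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0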
00:32:38.852 [2024-07-15 20:27:36.042332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226230 ] 00:32:38.852 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.852 [2024-07-15 20:27:36.116284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.852 [2024-07-15 20:27:36.169940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.426 20:27:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.426 20:27:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:39.426 20:27:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:39.426 20:27:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:39.687 20:27:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:39.687 20:27:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:39.948 20:27:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:39.948 20:27:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:39.948 [2024-07-15 20:27:37.260385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:39.948 nvme0n1 00:32:39.948 20:27:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:39.948 20:27:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:39.948 20:27:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:39.948 20:27:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:39.948 20:27:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:39.948 20:27:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.209 20:27:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:40.209 20:27:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:40.209 20:27:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:40.209 20:27:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:40.209 20:27:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.209 20:27:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.209 20:27:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:40.471 20:27:37 keyring_linux -- keyring/linux.sh@25 -- # sn=211179249 00:32:40.471 20:27:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:40.471 20:27:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:40.471 20:27:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 211179249 == \2\1\1\1\7\9\2\4\9 ]] 00:32:40.471 20:27:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 211179249 00:32:40.471 20:27:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:40.472 20:27:37 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:40.472 Running I/O for 1 seconds... 00:32:41.414 00:32:41.414 Latency(us) 00:32:41.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.414 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:41.414 nvme0n1 : 1.01 9651.53 37.70 0.00 0.00 13193.83 3181.23 16384.00 00:32:41.414 =================================================================================================================== 00:32:41.414 Total : 9651.53 37.70 0.00 0.00 13193.83 3181.23 16384.00 00:32:41.414 0 00:32:41.414 20:27:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:41.414 20:27:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:41.675 20:27:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:41.675 20:27:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:41.675 20:27:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:41.675 20:27:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:41.675 20:27:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:41.675 20:27:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.935 20:27:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:41.935 20:27:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:41.935 20:27:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:41.936 20:27:39 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:41.936 [2024-07-15 20:27:39.280593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:41.936 [2024-07-15 20:27:39.281326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c950 (107): Transport endpoint is not connected 00:32:41.936 [2024-07-15 20:27:39.282322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c950 (9): Bad file descriptor 00:32:41.936 [2024-07-15 20:27:39.283324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.936 [2024-07-15 20:27:39.283330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:41.936 [2024-07-15 20:27:39.283335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.936 request: 00:32:41.936 { 00:32:41.936 "name": "nvme0", 00:32:41.936 "trtype": "tcp", 00:32:41.936 "traddr": "127.0.0.1", 00:32:41.936 "adrfam": "ipv4", 00:32:41.936 "trsvcid": "4420", 00:32:41.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:41.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:41.936 "prchk_reftag": false, 00:32:41.936 "prchk_guard": false, 00:32:41.936 "hdgst": false, 00:32:41.936 "ddgst": false, 00:32:41.936 "psk": ":spdk-test:key1", 00:32:41.936 "method": "bdev_nvme_attach_controller", 00:32:41.936 "req_id": 1 00:32:41.936 } 00:32:41.936 Got JSON-RPC error response 00:32:41.936 response: 00:32:41.936 { 00:32:41.936 "code": -5, 00:32:41.936 "message": "Input/output error" 00:32:41.936 } 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@33 -- # sn=211179249 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 211179249 00:32:41.936 1 links removed 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@33 -- # sn=528041886 00:32:41.936 
20:27:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 528041886 00:32:41.936 1 links removed 00:32:41.936 20:27:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1226230 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1226230 ']' 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1226230 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:41.936 20:27:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226230 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226230' 00:32:42.197 killing process with pid 1226230 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 1226230 00:32:42.197 Received shutdown signal, test time was about 1.000000 seconds 00:32:42.197 00:32:42.197 Latency(us) 00:32:42.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.197 =================================================================================================================== 00:32:42.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 1226230 00:32:42.197 20:27:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1226192 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1226192 ']' 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1226192 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226192 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226192' 00:32:42.197 killing process with pid 1226192 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 1226192 00:32:42.197 20:27:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 1226192 00:32:42.458 00:32:42.459 real 0m4.856s 00:32:42.459 user 0m8.285s 00:32:42.459 sys 0m1.291s 00:32:42.459 20:27:39 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:42.459 20:27:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:42.459 ************************************ 00:32:42.459 END TEST keyring_linux 00:32:42.459 ************************************ 00:32:42.459 20:27:39 -- common/autotest_common.sh@1142 -- # return 0 00:32:42.459 20:27:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:42.459 20:27:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:42.459 20:27:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:42.459 20:27:39 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:42.459 20:27:39 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:42.459 20:27:39 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:42.459 20:27:39 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:42.459 20:27:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:42.459 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:42.459 20:27:39 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:42.459 20:27:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:42.459 20:27:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:42.459 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:50.608 INFO: APP EXITING 00:32:50.608 INFO: killing all VMs 00:32:50.608 INFO: killing vhost app 00:32:50.608 WARN: no vhost pid file found 00:32:50.608 INFO: EXIT DONE 00:32:53.914 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:53.914 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:53.914 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:57.219 Cleaning 00:32:57.219 Removing: /var/run/dpdk/spdk0/config 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:57.219 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:57.219 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:57.219 Removing: /var/run/dpdk/spdk1/config 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:57.219 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:57.219 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:57.219 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:57.219 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:57.481 Removing: /var/run/dpdk/spdk2/config 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:57.481 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:57.481 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:57.481 Removing: /var/run/dpdk/spdk3/config 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:57.481 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:57.481 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:57.481 Removing: /var/run/dpdk/spdk4/config 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:57.481 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:57.481 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:57.481 Removing: /dev/shm/bdev_svc_trace.1 00:32:57.481 Removing: /dev/shm/nvmf_trace.0 00:32:57.481 Removing: /dev/shm/spdk_tgt_trace.pid769381 00:32:57.481 Removing: /var/run/dpdk/spdk0 00:32:57.481 Removing: /var/run/dpdk/spdk1 00:32:57.481 Removing: /var/run/dpdk/spdk2 00:32:57.481 Removing: /var/run/dpdk/spdk3 00:32:57.481 Removing: /var/run/dpdk/spdk4 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1013786 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1019633 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1021629 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1023702 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1023992 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1024324 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1024386 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1025066 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1027134 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1028162 00:32:57.481 Removing: 
/var/run/dpdk/spdk_pid1028702 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1031237 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1031943 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1032792 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1037706 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1049863 00:32:57.481 Removing: /var/run/dpdk/spdk_pid1054761 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1062083 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1064038 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1065820 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1070958 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1075800 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1084724 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1084726 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1089781 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1090110 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1090223 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1090785 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1090792 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1096165 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1096947 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1102156 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1105500 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1111872 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1118310 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1128560 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1137020 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1137056 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1159395 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1160173 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1161017 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1161729 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1162782 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1163477 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1164165 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1164842 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1169989 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1170320 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1177822 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1178049 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1180711 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1187815 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1187823 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1193689 00:32:57.742 Removing: /var/run/dpdk/spdk_pid1196114 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1198404 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1199759 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1202106 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1203631 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1213274 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1213902 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1214570 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1217600 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1218056 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1218640 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1223831 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1223981 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1225613 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1226192 00:32:57.743 Removing: /var/run/dpdk/spdk_pid1226230 00:32:57.743 Removing: /var/run/dpdk/spdk_pid767855 00:32:57.743 Removing: /var/run/dpdk/spdk_pid769381 00:32:57.743 Removing: /var/run/dpdk/spdk_pid769942 00:32:57.743 Removing: /var/run/dpdk/spdk_pid771131 00:32:57.743 Removing: /var/run/dpdk/spdk_pid771288 00:32:57.743 Removing: /var/run/dpdk/spdk_pid772566 00:32:58.004 Removing: /var/run/dpdk/spdk_pid772677 00:32:58.004 Removing: 
/var/run/dpdk/spdk_pid773046 00:32:58.004 Removing: /var/run/dpdk/spdk_pid773955 00:32:58.004 Removing: /var/run/dpdk/spdk_pid774709 00:32:58.004 Removing: /var/run/dpdk/spdk_pid775092 00:32:58.004 Removing: /var/run/dpdk/spdk_pid775443 00:32:58.004 Removing: /var/run/dpdk/spdk_pid775732 00:32:58.004 Removing: /var/run/dpdk/spdk_pid775982 00:32:58.004 Removing: /var/run/dpdk/spdk_pid776311 00:32:58.004 Removing: /var/run/dpdk/spdk_pid776663 00:32:58.004 Removing: /var/run/dpdk/spdk_pid777044 00:32:58.004 Removing: /var/run/dpdk/spdk_pid778112 00:32:58.004 Removing: /var/run/dpdk/spdk_pid781431 00:32:58.004 Removing: /var/run/dpdk/spdk_pid781732 00:32:58.004 Removing: /var/run/dpdk/spdk_pid782100 00:32:58.004 Removing: /var/run/dpdk/spdk_pid782344 00:32:58.004 Removing: /var/run/dpdk/spdk_pid782805 00:32:58.004 Removing: /var/run/dpdk/spdk_pid782835 00:32:58.004 Removing: /var/run/dpdk/spdk_pid783466 00:32:58.004 Removing: /var/run/dpdk/spdk_pid783522 00:32:58.004 Removing: /var/run/dpdk/spdk_pid783890 00:32:58.004 Removing: /var/run/dpdk/spdk_pid783967 00:32:58.004 Removing: /var/run/dpdk/spdk_pid784257 00:32:58.004 Removing: /var/run/dpdk/spdk_pid784423 00:32:58.004 Removing: /var/run/dpdk/spdk_pid785018 00:32:58.004 Removing: /var/run/dpdk/spdk_pid785166 00:32:58.004 Removing: /var/run/dpdk/spdk_pid785460 00:32:58.004 Removing: /var/run/dpdk/spdk_pid785828 00:32:58.004 Removing: /var/run/dpdk/spdk_pid785853 00:32:58.004 Removing: /var/run/dpdk/spdk_pid786124 00:32:58.004 Removing: /var/run/dpdk/spdk_pid786314 00:32:58.004 Removing: /var/run/dpdk/spdk_pid786623 00:32:58.004 Removing: /var/run/dpdk/spdk_pid786970 00:32:58.004 Removing: /var/run/dpdk/spdk_pid787319 00:32:58.004 Removing: /var/run/dpdk/spdk_pid787589 00:32:58.004 Removing: /var/run/dpdk/spdk_pid787783 00:32:58.004 Removing: /var/run/dpdk/spdk_pid788058 00:32:58.004 Removing: /var/run/dpdk/spdk_pid788413 00:32:58.004 Removing: /var/run/dpdk/spdk_pid788760 00:32:58.004 Removing: /var/run/dpdk/spdk_pid789076 00:32:58.004 Removing: /var/run/dpdk/spdk_pid789268 00:32:58.004 Removing: /var/run/dpdk/spdk_pid789499 00:32:58.004 Removing: /var/run/dpdk/spdk_pid789852 00:32:58.004 Removing: /var/run/dpdk/spdk_pid790206 00:32:58.004 Removing: /var/run/dpdk/spdk_pid790548 00:32:58.004 Removing: /var/run/dpdk/spdk_pid790753 00:32:58.004 Removing: /var/run/dpdk/spdk_pid790965 00:32:58.004 Removing: /var/run/dpdk/spdk_pid791301 00:32:58.004 Removing: /var/run/dpdk/spdk_pid791650 00:32:58.004 Removing: /var/run/dpdk/spdk_pid792004 00:32:58.004 Removing: /var/run/dpdk/spdk_pid792073 00:32:58.004 Removing: /var/run/dpdk/spdk_pid792483 00:32:58.004 Removing: /var/run/dpdk/spdk_pid796938 00:32:58.004 Removing: /var/run/dpdk/spdk_pid850242 00:32:58.004 Removing: /var/run/dpdk/spdk_pid855390 00:32:58.004 Removing: /var/run/dpdk/spdk_pid867227 00:32:58.004 Removing: /var/run/dpdk/spdk_pid874068 00:32:58.004 Removing: /var/run/dpdk/spdk_pid879082 00:32:58.004 Removing: /var/run/dpdk/spdk_pid879763 00:32:58.326 Removing: /var/run/dpdk/spdk_pid886936 00:32:58.327 Removing: /var/run/dpdk/spdk_pid894139 00:32:58.327 Removing: /var/run/dpdk/spdk_pid894145 00:32:58.327 Removing: /var/run/dpdk/spdk_pid895147 00:32:58.327 Removing: /var/run/dpdk/spdk_pid896151 00:32:58.327 Removing: /var/run/dpdk/spdk_pid897160 00:32:58.327 Removing: /var/run/dpdk/spdk_pid897830 00:32:58.327 Removing: /var/run/dpdk/spdk_pid897840 00:32:58.327 Removing: /var/run/dpdk/spdk_pid898173 00:32:58.327 Removing: /var/run/dpdk/spdk_pid898185 00:32:58.327 Removing: 
/var/run/dpdk/spdk_pid898283 00:32:58.327 Removing: /var/run/dpdk/spdk_pid899361 00:32:58.327 Removing: /var/run/dpdk/spdk_pid900382 00:32:58.327 Removing: /var/run/dpdk/spdk_pid901493 00:32:58.327 Removing: /var/run/dpdk/spdk_pid902095 00:32:58.327 Removing: /var/run/dpdk/spdk_pid902199 00:32:58.327 Removing: /var/run/dpdk/spdk_pid902483 00:32:58.327 Removing: /var/run/dpdk/spdk_pid903687 00:32:58.327 Removing: /var/run/dpdk/spdk_pid905046 00:32:58.327 Removing: /var/run/dpdk/spdk_pid915196 00:32:58.327 Removing: /var/run/dpdk/spdk_pid915991 00:32:58.327 Removing: /var/run/dpdk/spdk_pid921074 00:32:58.327 Removing: /var/run/dpdk/spdk_pid928062 00:32:58.327 Removing: /var/run/dpdk/spdk_pid931075 00:32:58.327 Removing: /var/run/dpdk/spdk_pid943085 00:32:58.327 Removing: /var/run/dpdk/spdk_pid953732 00:32:58.327 Removing: /var/run/dpdk/spdk_pid955834 00:32:58.327 Removing: /var/run/dpdk/spdk_pid957017 00:32:58.327 Removing: /var/run/dpdk/spdk_pid977861 00:32:58.327 Removing: /var/run/dpdk/spdk_pid982329 00:32:58.327 Clean 00:32:58.327 20:27:55 -- common/autotest_common.sh@1451 -- # return 0 00:32:58.327 20:27:55 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:58.327 20:27:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:58.327 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:32:58.327 20:27:55 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:58.327 20:27:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:58.327 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:32:58.327 20:27:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:58.327 20:27:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:58.327 20:27:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:58.327 20:27:55 -- spdk/autotest.sh@391 -- # hash lcov 00:32:58.327 20:27:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:58.327 20:27:55 -- spdk/autotest.sh@393 -- # hostname 00:32:58.327 20:27:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:58.592 geninfo: WARNING: invalid characters removed from testname! 
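The coverage post-processing that follows reduces to merging the baseline and post-test captures and pruning code outside the SPDK tree (a condensed sketch: the lcov_q wrapper is shorthand, flags are abbreviated, and the full invocations, including the genhtml rc options, appear in the trace below):

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  lcov_q() { lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q "$@"; }

  # Combine the pre-test baseline with the capture taken after the tests.
  lcov_q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Strip results attributed to DPDK, system headers and auxiliary apps.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov_q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done

  # Keep only the merged result.
  rm -f "$out/cov_base.info" "$out/cov_test.info"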
00:33:25.176 20:28:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.176 20:28:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.091 20:28:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:29.032 20:28:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:30.952 20:28:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:32.336 20:28:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:33.721 20:28:31 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:33.984 20:28:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.984 20:28:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:33.984 20:28:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.984 20:28:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.984 20:28:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.984 20:28:31 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.984 20:28:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.984 20:28:31 -- paths/export.sh@5 -- $ export PATH 00:33:33.984 20:28:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.984 20:28:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:33.984 20:28:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:33.984 20:28:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721068111.XXXXXX 00:33:33.984 20:28:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721068111.5VkqjY 00:33:33.984 20:28:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:33.984 20:28:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:33.984 20:28:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:33.984 20:28:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:33.984 20:28:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:33.984 20:28:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:33.984 20:28:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:33.984 20:28:31 -- common/autotest_common.sh@10 -- $ set +x 00:33:33.984 20:28:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:33.984 20:28:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:33.984 20:28:31 -- pm/common@17 -- $ local monitor 00:33:33.984 20:28:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:33.984 20:28:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:33.984 20:28:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:33.984 20:28:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:33.984 20:28:31 -- pm/common@21 -- $ date +%s 00:33:33.984 20:28:31 -- pm/common@25 -- $ sleep 1 00:33:33.984 
20:28:31 -- pm/common@21 -- $ date +%s 00:33:33.984 20:28:31 -- pm/common@21 -- $ date +%s 00:33:33.984 20:28:31 -- pm/common@21 -- $ date +%s 00:33:33.984 20:28:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068111 00:33:33.984 20:28:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068111 00:33:33.984 20:28:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068111 00:33:33.984 20:28:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068111 00:33:33.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068111_collect-vmstat.pm.log 00:33:33.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068111_collect-cpu-load.pm.log 00:33:33.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068111_collect-cpu-temp.pm.log 00:33:33.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068111_collect-bmc-pm.bmc.pm.log 00:33:34.926 20:28:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:34.926 20:28:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:34.926 20:28:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:34.926 20:28:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:34.926 20:28:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:34.926 20:28:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:34.926 20:28:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:34.926 20:28:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:34.926 20:28:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:34.926 20:28:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:34.926 20:28:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:34.926 20:28:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:34.926 20:28:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:34.926 20:28:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:34.926 20:28:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:34.926 20:28:32 -- pm/common@44 -- $ pid=1238689 00:33:34.926 20:28:32 -- pm/common@50 -- $ kill -TERM 1238689 00:33:34.926 20:28:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:34.926 20:28:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:34.926 20:28:32 -- pm/common@44 -- $ pid=1238690 00:33:34.926 20:28:32 -- pm/common@50 -- $ 
kill -TERM 1238690 00:33:34.926 20:28:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:34.926 20:28:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:34.926 20:28:32 -- pm/common@44 -- $ pid=1238692 00:33:34.926 20:28:32 -- pm/common@50 -- $ kill -TERM 1238692 00:33:34.926 20:28:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:34.927 20:28:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:34.927 20:28:32 -- pm/common@44 -- $ pid=1238716 00:33:34.927 20:28:32 -- pm/common@50 -- $ sudo -E kill -TERM 1238716 00:33:34.927 + [[ -n 647391 ]] 00:33:34.927 + sudo kill 647391 00:33:35.197 [Pipeline] } 00:33:35.216 [Pipeline] // stage 00:33:35.220 [Pipeline] } 00:33:35.239 [Pipeline] // timeout 00:33:35.242 [Pipeline] } 00:33:35.257 [Pipeline] // catchError 00:33:35.262 [Pipeline] } 00:33:35.279 [Pipeline] // wrap 00:33:35.285 [Pipeline] } 00:33:35.303 [Pipeline] // catchError 00:33:35.312 [Pipeline] stage 00:33:35.314 [Pipeline] { (Epilogue) 00:33:35.329 [Pipeline] catchError 00:33:35.330 [Pipeline] { 00:33:35.345 [Pipeline] echo 00:33:35.346 Cleanup processes 00:33:35.354 [Pipeline] sh 00:33:35.640 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:35.640 1238795 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:35.640 1239240 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:35.657 [Pipeline] sh 00:33:35.995 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:35.995 ++ grep -v 'sudo pgrep' 00:33:35.995 ++ awk '{print $1}' 00:33:35.995 + sudo kill -9 1238795 00:33:36.006 [Pipeline] sh 00:33:36.286 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:46.290 [Pipeline] sh 00:33:46.577 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:46.577 Artifacts sizes are good 00:33:46.592 [Pipeline] archiveArtifacts 00:33:46.599 Archiving artifacts 00:33:46.791 [Pipeline] sh 00:33:47.078 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:47.095 [Pipeline] cleanWs 00:33:47.106 [WS-CLEANUP] Deleting project workspace... 00:33:47.106 [WS-CLEANUP] Deferred wipeout is used... 00:33:47.114 [WS-CLEANUP] done 00:33:47.115 [Pipeline] } 00:33:47.138 [Pipeline] // catchError 00:33:47.152 [Pipeline] sh 00:33:47.443 + logger -p user.info -t JENKINS-CI 00:33:47.454 [Pipeline] } 00:33:47.471 [Pipeline] // stage 00:33:47.476 [Pipeline] } 00:33:47.495 [Pipeline] // node 00:33:47.501 [Pipeline] End of Pipeline 00:33:47.541 Finished: SUCCESS
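The resource monitors started for autopackage are torn down with a simple pid-file check, visible in the pm/common calls above; roughly (the loop and the cat of the pid file are a simplification, since the trace only shows the existence test and the resulting kill -TERM for each collector):

  power=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile=$power/$mon.pid
      # Signal the collector only if it actually left a pid file behind;
      # collect-bmc-pm runs under sudo, hence the sudo -E kill in the trace.
      [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
  done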